How to stack the same RNN for every layer? - keras

I would like to know how to stack many layers of RNN where every layer is the same RNN, i.e. all layers share the same weights. I have read about stacking LSTMs and RNNs, but I found that each layer gets its own weights.
1-layer code:
inputs = keras.Input(shape=(maxlen,), batch_size=batch_size)
Emb_layer = layers.Embedding(max_features, word_dim)
Emb_output = Emb_layer(inputs)
first_layer = layers.SimpleRNN(n_hidden, use_bias=True, return_sequences=False, stateful=False)
first_layer_output = first_layer(Emb_output)
dense_layer = layers.Dense(1, activation='sigmoid')
dense_output = dense_layer(first_layer_output)
model = keras.Model(inputs=inputs, outputs=dense_output)
model.summary()
[Figure: model.summary() output of the 1-layer RNN]
inputs = keras.Input(shape=(maxlen,), batch_size=batch_size)
Emb_layer = layers.Embedding(max_features, word_dim)
Emb_output = Emb_layer(inputs)
first_layer = layers.SimpleRNN(n_hidden, use_bias=True, return_sequences=True, stateful=True)
first_layer_output = first_layer(Emb_output)
first_layer_state = first_layer.states
second_layer = layers.SimpleRNN(n_hidden, use_bias=True, return_sequences=False, stateful=False)
second_layer_set_state = second_layer(first_layer_output, initial_state=first_layer_state)
dense_layer = layers.Dense(1, activation='sigmoid')
dense_output = dense_layer(second_layer_set_state)
model = keras.Model(inputs=inputs, outputs=dense_output)
model.summary()
[Figure: model.summary() output of the stacked 2-layer RNN]
For example, I want to build a two-layer RNN where the first layer and the second have the same weights, such that when I update the weights of the first layer, the second layer is updated and shares the same values. As far as I know, TF has RNN.states, which returns the value from the previous layer. However, when I use it, each layer still seems to be treated independently. The 2-layer RNN that I want should have as many trainable parameters as the 1-layer one, since the layers share the same weights, but this did not work.

You can view the layer object as a container for the weights that knows how to apply them. You can use the layer object as many times as you want. Assuming the embedding and the RNN dimension are the same, you can do:
states = Emb_layer(inputs)
first_layer = layers.SimpleRNN(n_hidden, use_bias=True, return_sequences=True)
for _ in range(10):
    states = first_layer(states)
There is no reason to set stateful to True. That option is used when you split long sequences into multiple batches and want the RNN to remember the state between batches, so you do not have to set the initial states manually. You can get the final state of the RNN (the one you want to use for classification) by simply indexing the last position of states.
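For the two-layer case in the question, a minimal end-to-end sketch could look like this (one assumption here: the embedding dimension is set to n_hidden so the shared RNN can consume its own output). model.summary() should then count the RNN weights only once, matching the 1-layer model:
inputs = keras.Input(shape=(maxlen,), batch_size=batch_size)
x = layers.Embedding(max_features, n_hidden)(inputs)  # embedding dim == n_hidden
shared_rnn = layers.SimpleRNN(n_hidden, use_bias=True, return_sequences=True)
x = shared_rnn(x)  # "layer 1"
x = shared_rnn(x)  # "layer 2": same object, so the same weights
last = layers.Lambda(lambda t: t[:, -1, :])(x)  # output at the final time step
dense_output = layers.Dense(1, activation='sigmoid')(last)
model = keras.Model(inputs=inputs, outputs=dense_output)
model.summary()  # SimpleRNN parameters appear once, as in the 1-layer model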

Related

Keras, how to feed an Embedding layer with a random sampling of a Softmax layer

In the model I am constructing, I have the following layer:
y = layers.Dense(10, activation="softmax")(x)
And I want the next layer of this model to be an Embedding layer that "represents" the choice made by the Dense layer.
I.e., I want:
to sample a choice from y (based on the probabilities given by the softmax values), and
to turn this choice into an Embedding layer with vocabulary size 10.
Any idea how to do this?
Regards
Initial answer
Add a layer that takes the argmax of the output of the dense layer before feeding it into the embedding layer to propagate the most likely category label:
import tensorflow as tf
from tensorflow.keras import backend as K
# generate some data
BATCH_SIZE, INPUT_DIM = (4, 2)
x = tf.random.uniform([BATCH_SIZE, INPUT_DIM])
# model
NUM_CLASSES = 10
EMBEDDING_DIM = 10
dense = tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')(x)
argmax = tf.keras.layers.Lambda(lambda t: K.argmax(t, axis=-1))(dense)
emb = tf.keras.layers.Embedding(NUM_CLASSES, EMBEDDING_DIM)(argmax)
Updated answer
If you want to propagate a randomly sampled category label instead of the most likely category label, you can do so by using tf.random.categorical. Note that tf.random.categorical takes logits as inputs, so you don't need the softmax activation at the end of the dense layer.
NUM_CLASSES = 10
EMBEDDING_DIM = 10
logits = tf.keras.layers.Dense(NUM_CLASSES)(x)
sample = tf.keras.layers.Lambda(lambda logits: tf.squeeze(tf.random.categorical(logits, 1)))(logits)
emb = tf.keras.layers.Embedding(NUM_CLASSES, EMBEDDING_DIM)(sample)
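Note that both K.argmax and tf.random.categorical are non-differentiable, so no gradient flows back into the Dense layer through the sampled indices during training; if that path needs to be trainable end to end, a continuous relaxation such as Gumbel-softmax is the usual workaround.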

Add units (Neurons) to an existing model in Keras

I have trained a model and I want to add more units to its hidden layer and train it for some more epochs. I am implementing a constructive learning algorithm. How can I add a neuron to an existing model's hidden layer? And is there a way to train only the added units' parameters while the other parameters stay frozen? (in Keras)
def create_first_sub_NN(X):
    sub_input = tf.keras.Input(shape=(X.shape[1],))
    h = Dense(1, activation="sigmoid", name="hidden")(sub_input)
    h = tf.keras.Model(inputs=sub_input, outputs=h)
    m_combined = tf.keras.layers.concatenate([h.input, h.output])
    out = Dense(1, activation="relu")(m_combined)
    out = tf.keras.Model(inputs=sub_input, outputs=out)
    return out

def train_current_model(model, input_groups, Y, error_thr):
    opt = keras.optimizers.Adam(learning_rate=0.01)
    callbacks = stopAtLossValue()
    # overfitCallback = EarlyStopping(monitor='loss', min_delta=5, patience=10)
    # (i.e. stop training if for 10 epochs the loss did not decrease by more than 5)
    model.compile(optimizer=opt, loss='mean_absolute_error')
    model.fit(input_groups, Y, epochs=100, batch_size=32, callbacks=[callbacks])

model = create_first_sub_NN(X1_train)
keras.utils.plot_model(model, "first.png", show_shapes=True)
print(model.summary())
list_of_inputs = [sub_X_list[0]]
train_current_model(model, list_of_inputs, train_label, 0.1)
# how to add more units to the hidden layer?
I want to add neurons to my hidden layer repeatedly, until my network error gets below the threshold.
I solved the problem. Instead of adding a neuron to the current layer, we can add another Dense layer that is connected to the previous and the next layer, and then concatenate the new layer with the old one.
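A minimal sketch of one growth step along those lines, assuming the trained hidden layer is named "hidden" as in create_first_sub_NN above (the helper grow_hidden_layer and the name hidden_new_{step} are illustrative, not from the original post). Freezing the reused layer means only the new unit's parameters, plus the rebuilt output layer, are trained:
import tensorflow as tf
from tensorflow.keras.layers import Dense, concatenate

def grow_hidden_layer(old_model, X, step):
    sub_input = tf.keras.Input(shape=(X.shape[1],))
    # Reuse the already-trained hidden layer and freeze its weights.
    old_hidden = old_model.get_layer("hidden")
    old_hidden.trainable = False
    h_old = old_hidden(sub_input)
    # One new trainable unit, in parallel with the frozen one.
    h_new = Dense(1, activation="sigmoid", name=f"hidden_new_{step}")(sub_input)
    h = concatenate([h_old, h_new])
    # Keep the input skip connection from the original architecture.
    m_combined = concatenate([sub_input, h])
    out = Dense(1, activation="relu")(m_combined)
    return tf.keras.Model(inputs=sub_input, outputs=out)
Note that the output Dense layer is rebuilt from scratch at each step; carrying its old weights over would require copying them manually.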

Create unsupervised embedding model in keras?

I want to create an autoencoder with the following architecture:
path_source_token_input = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='source_token_input')
path_input = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='path_input')
path_target_token_input = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='target_token_input')
paths_embedded = Embedding(PATH_SIZE, DEFAULT_EMBEDDINGS_SIZE, name='path_embedding')(path_input)
token_embedding_shared_layer = Embedding(TOKEN_SIZE, DEFAULT_EMBEDDINGS_SIZE, name='token_embedding')
path_source_token_embedded = token_embedding_shared_layer(path_source_token_input)
path_target_token_embedded = token_embedding_shared_layer(path_target_token_input)
context_embedded = Concatenate()([path_source_token_embedded, paths_embedded, path_target_token_embedded]) # --> this up to now, is the output of the STANDALONE embedding model
# -------- SPLIT HERE? --------
context_after_dense = TimeDistributed(Dense(CODE_VECTOR_SIZE, use_bias=False, activation='tanh'))(context_embedded) # in short, this layer probably has to stay
encoded = LSTM(100, activation='relu', input_shape=context_after_dense.shape)(context_after_dense)
decoded = RepeatVector(MAX_CONTEXTS)(encoded)
decoded = LSTM(100, activation='relu', return_sequences=True)(decoded)
result = TimeDistributed(Dense(1), name='PROBLEM_is_here')(decoded) # this seems to be some trick according to https://github.com/keras-team/keras/issues/10753, so probably don't remove
inputs = (path_source_token_input, path_input, path_target_token_input)
model = tf.keras.Model(inputs=inputs, outputs=result)
So far I have learned that it is impossible to implement an inverse embedding layer in the decoder, so naturally my conclusion is to split my network in two: the first part generates the concatenated embeddings of the input, and the second part is the autoencoder itself, whose input is the concatenated embeddings (the output of the first part). Now my question is: is it possible to create an unsupervised embedding model in Keras, or anywhere else for that matter? My data is unlabeled, and the point of my final neural network would be to create clusters of said unlabeled data.
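For illustration, a minimal sketch of the split described above, reusing the names from the code (AE_INPUT_DIM is a made-up shorthand for the concatenated embedding width, and the final Dense is widened so the reconstruction matches the input, unlike the Dense(1) above). This only shows the mechanics of the split, not how to train the embeddings without labels:
# Part 1: the standalone embedding model (token/path ids -> concatenated embeddings)
embedding_model = tf.keras.Model(
    inputs=[path_source_token_input, path_input, path_target_token_input],
    outputs=context_embedded)
# Part 2: the autoencoder over the embedding space
AE_INPUT_DIM = 3 * DEFAULT_EMBEDDINGS_SIZE
ae_input = Input(shape=(MAX_CONTEXTS, AE_INPUT_DIM), name='embedded_context_input')
x = TimeDistributed(Dense(CODE_VECTOR_SIZE, use_bias=False, activation='tanh'))(ae_input)
encoded = LSTM(100, activation='relu')(x)
decoded = RepeatVector(MAX_CONTEXTS)(encoded)
decoded = LSTM(100, activation='relu', return_sequences=True)(decoded)
reconstruction = TimeDistributed(Dense(AE_INPUT_DIM))(decoded)
autoencoder = tf.keras.Model(ae_input, reconstruction)
autoencoder.compile(optimizer='adam', loss='mse')
# unsupervised training: autoencoder.fit(emb, emb, ...) with emb = embedding_model.predict(...)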

Get feature vectors from BertForSequenceClassification

I have successfully built a sentiment analysis tool with BertForSequenceClassification from huggingface/transformers to classify $tsla tweets as positive or negative.
However, I can't find out how to obtain the feature vectors per tweet (more specifically the embedding of [CLS]) from my fine-tuned model.
More info on the model used:
model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, num_labels=num_labels)
model.config.output_hidden_states = True
tokenizer = BertTokenizer(OUTPUT_DIR+'vocab.txt')
However, when I run the code below the output variable only consists of the logits.
model.eval()
eval_loss = 0
nb_eval_steps = 0
preds = []
for input_ids, input_mask, segment_ids, label_ids in tqdm_notebook(eval_dataloader, desc="Evaluating"):
    input_ids = input_ids.to(device)
    input_mask = input_mask.to(device)
    segment_ids = segment_ids.to(device)
    label_ids = label_ids.to(device)
    with torch.no_grad():
        output = model(input_ids, token_type_ids=segment_ids, attention_mask=input_mask)
I also have this problem after fine-tuning BertForSequenceClassification. I know your purpose is to get the hidden state of [CLS] as the representation of each tweet, right? Following the API documentation, I think the code is:
model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, output_hidden_states=True)
logits, hidden_states = model(input_ids, attn_masks)
cls_hidden_state = hidden_states[-1][:, 0, :]  # hidden state of the first token ([CLS]) in the last layer
or
model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, output_hidden_states=True)
last_hidden_states = model.bert(input_ids, attn_masks)[0]
cls_hidden_state = last_hidden_states[:, 0, :]
BertForSequenceClassification is a wrapper that consists of two parts: the BERT model (attribute bert) and a classifier (attribute classifier).
You can call the underlying BERT model directly. If you pass your input straight to it, you will get the hidden states. It returns a tuple: the first member is the last-layer hidden states of all tokens, the second is the pooled [CLS] vector (the [CLS] hidden state passed through the pooler).
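A sketch of that (variable names follow the evaluation loop above; this assumes a transformers version with the tuple-style return):
with torch.no_grad():
    # model.bert returns (sequence_output, pooled_output):
    # sequence_output has shape (batch, seq_len, hidden_size),
    # pooled_output is the [CLS] state passed through the pooler.
    sequence_output, pooled_output = model.bert(
        input_ids, token_type_ids=segment_ids, attention_mask=input_mask)
    cls_vectors = sequence_output[:, 0, :]  # raw [CLS] hidden state per tweet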

Is it possible to train using same model with two inputs?

Hello, I have a question about Keras.
Currently I want to implement a network that uses the same CNN model, feeds two images as inputs to that CNN model, and provides the two results of the CNN model to a Dense model.
For example:
def cnn_model():
    input = Input(shape=(None, None, 3))
    x = Conv2D(8, (3, 3), strides=(1, 1))(input)
    x = GlobalAvgPool2D()(x)
    model = Model(input, x)
    return model

def fc_model(cnn1, cnn2):
    input_1 = cnn1.output
    input_2 = cnn2.output
    input = concatenate([input_1, input_2])
    x = Dense(1, input_shape=(None, 16))(input)
    x = Activation('sigmoid')(x)
    model = Model([cnn1.input, cnn2.input], x)
    return model

def main():
    cnn1 = cnn_model()
    cnn2 = cnn_model()
    model = fc_model(cnn1, cnn2)
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(x=[image1, image2], y=[1.0, 1.0], batch_size=1, epochs=1)
I want to implement a model something like this and train it, but I got the error message below:
'All layer names should be unique'
Actually, I want to use only one CNN model as a feature extractor and finally use the two features to predict one float value in 0.0 ~ 1.0.
So the whole system: take two images, extract features with the same CNN model, and provide the features to a Dense model to get one floating-point value.
Please help me implement this system and how to train it.
Thank you
See the section of the Keras documentation on shared layers:
https://keras.io/getting-started/functional-api-guide/
A code snippet from the documentation above demonstrating this:
# This layer can take as input a matrix
# and will return a vector of size 64
shared_lstm = LSTM(64)
# When we reuse the same layer instance
# multiple times, the weights of the layer
# are also being reused
# (it is effectively *the same* layer)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)
# We can then concatenate the two vectors:
merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1)
# And add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)
# We define a trainable model linking the
# tweet inputs to the predictions
model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([data_a, data_b], labels, epochs=10)
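Adapted to the code in the question, the fix for the 'All layer names should be unique' error is to build cnn_model() once and call that single instance on both inputs. A sketch (image1, image2, and labels stand for the question's training arrays):
from keras.layers import Input, Dense, concatenate
from keras.models import Model

image_a = Input(shape=(None, None, 3))
image_b = Input(shape=(None, None, 3))

shared_cnn = cnn_model()      # build ONE instance -> one set of weights
feat_a = shared_cnn(image_a)  # the same model object is called on both
feat_b = shared_cnn(image_b)  # inputs, so no duplicate layer names

merged = concatenate([feat_a, feat_b])
score = Dense(1, activation='sigmoid')(merged)

model = Model([image_a, image_b], score)
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(x=[image1, image2], y=labels, batch_size=1, epochs=1)  # labels: one float per image pair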
