Expected randomness does not occur in a TensorFlow layer (TensorFlow 2.x)

I wrote a custom layer that shuffles the input. When I test the layer, the shuffling does not occur. Here is my minimal noise layer:
class ShuffleLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(ShuffleLayer, self).__init__(**kwargs)

    def call(self, inputs, training=None):
        if training:
            shuffled = tf.stop_gradient(tf.random.shuffle(inputs))
            return shuffled
        return inputs
When I test the layer, the output is not shuffled:
SL = ShuffleLayer()
x = tf.reshape(tf.range(0,10, dtype=tf.float32), (5,2))
y = SL(x)
print(x.numpy())
print(y.numpy())
[[0. 1.]
 [2. 3.]
 [4. 5.]
 [6. 7.]
 [8. 9.]]
[[0. 1.]
 [2. 3.]
 [4. 5.]
 [6. 7.]
 [8. 9.]]
Why does the expected behavior not occur?

Looking at the layer's call method, the layer does nothing when training is falsy. When the layer is called as y = SL(x), training defaults to None, so the inputs are returned unchanged. To get the shuffled output, pass training=True explicitly:
y = SL(x, training=True)
print(x.numpy())
print(y.numpy())
[[0. 1.]
 [2. 3.]
 [4. 5.]
 [6. 7.]
 [8. 9.]]
[[0. 1.]
 [6. 7.]
 [2. 3.]
 [8. 9.]
 [4. 5.]]
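As a side note, the training flag is normally propagated for you once the layer sits inside a Keras model: model.fit() calls layers with training=True, while model.predict() and plain calls leave it off. A minimal sketch of that behavior (the surrounding model and dummy targets here are made up purely for illustration):

import tensorflow as tf

# Illustrative model: only the ShuffleLayer comes from the question.
model = tf.keras.Sequential([
    ShuffleLayer(),            # shuffles rows only when called with training=True
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = tf.reshape(tf.range(0, 10, dtype=tf.float32), (5, 2))
y = tf.zeros((5, 1))           # dummy targets, just so fit() can run

model.fit(x, y, epochs=1)      # layers receive training=True -> shuffling happens
model.predict(x)               # layers receive training=False -> inputs pass through unchanged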

Related

Doc classification (PyTorch, BERT): how to change the training/validation loop to work for the multilabel case

I am trying to make BertForSequenceClassification.from_pretrained() work for multilabel classification, since the code I found online only covers the binary-label case.
I have a document-classification task with 12 labels, using the BERT language model as a PyTorch model.
What should I do to make it work for multilabel? I get this error when I run it initially, without changing the train/val loop:
ValueError: Target size (torch.Size([32])) must be the same as input size (torch.Size([32, 12]))
I assume I have to change the labels so they match the [32, 12] logits, but how do I do this?
Edit: Full output
======== Epoch 1 / 4 ========
Training...
torch.Size([32, 64])
tensor([[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1]], device='cuda:0')
tensor([ 9., 9., 3., 8., 9., 10., 4., 3., 4., 4., 9., 0., 9., 9.,
11., 3., 9., 9., 3., 4., 4., 7., 8., 9., 10., 6., 4., 0.,
10., 3., 4., 1.], dtype=torch.float64)
ValueError                                Traceback (most recent call last)
<ipython-input-25-ac7a3b802ac2> in <module>
     90 # Specifically, we'll get the loss (because we provided labels) and the
     91 # "logits"--the model outputs prior to activation.
---> 92 result = model(b_input_ids,
     93                token_type_ids=None,
     94                attention_mask=b_input_mask,

4 frames

/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
   3158
   3159     if not (target.size() == input.size()):
-> 3160         raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   3161
   3162     return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)

ValueError: Target size (torch.Size([32])) must be the same as input size (torch.Size([32, 12]))
The code:
from transformers import BertForSequenceClassification, AdamW, BertConfig
# Load BertForSequenceClassification, the pretrained BERT model with a single
# linear classification layer on top.
model = BertForSequenceClassification.from_pretrained(
"bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab.
num_labels = 2, # The number of output labels--2 for binary classification.
# You can increase this for multi-class tasks.
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
# Tell pytorch to run this model on the GPU.
model.cuda()
optimizer = AdamW(model.parameters(),
lr = 2e-5, # args.learning_rate - default is 5e-5, our notebook had 2e-5
eps = 1e-8 # args.adam_epsilon - default is 1e-8.
)
from transformers import get_linear_schedule_with_warmup
total_steps = len(train_dataloader) * epochs
# Create the learning rate scheduler.
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0, # Default value in run_glue.py
num_training_steps = total_steps)
import random
import numpy as np
# This training code is based on the `run_glue.py` script here:
# https://github.com/huggingface/transformer/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128
# Set the seed value all over the place to make this reproducible.
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# We'll store a number of quantities such as training and validation loss,
# validation accuracy, and timings.
training_stats = []
# Measure the total training time for the whole run.
total_t0 = time.time()
# For each epoch...
for epoch_i in range(0, epochs):
    # ========================================
    #               Training
    # ========================================
    # Perform one full pass over the training set.
    print("")
    print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
    print('Training...')
    # Measure how long the training epoch takes.
    t0 = time.time()
    # Reset the total loss for this epoch.
    total_train_loss = 0
    # Put the model into training mode. Don't be misled--the call to
    # `train` just changes the *mode*, it doesn't *perform* the training.
    # `dropout` and `batchnorm` layers behave differently during training
    # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
    model.train()
    # For each batch of training data...
    for step, batch in enumerate(train_dataloader):
        # Progress update every 40 batches.
        if step % 40 == 0 and not step == 0:
            # Calculate elapsed time in minutes.
            elapsed = format_time(time.time() - t0)
            # Report progress.
            print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
        # Unpack this training batch from our dataloader.
        #
        # As we unpack the batch, we'll also copy each tensor to the GPU using the
        # `to` method.
        #
        # `batch` contains three pytorch tensors:
        #   [0]: input ids
        #   [1]: attention masks
        #   [2]: labels
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)
        # Always clear any previously calculated gradients before performing a
        # backward pass. PyTorch doesn't do this automatically because
        # accumulating the gradients is "convenient while training RNNs".
        # (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
        model.zero_grad()
        # Perform a forward pass (evaluate the model on this training batch).
        # In PyTorch, calling `model` will in turn call the model's `forward`
        # function and pass down the arguments. The `forward` function is
        # documented here:
        # https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification
        # The results are returned in a results object, documented here:
        # https://huggingface.co/transformers/main_classes/output.html#transformers.modeling_outputs.SequenceClassifierOutput
        # Specifically, we'll get the loss (because we provided labels) and the
        # "logits"--the model outputs prior to activation.
        result = model(b_input_ids,
                       token_type_ids=None,
                       attention_mask=b_input_mask,
                       labels=b_labels,
                       return_dict=True)
        loss = result.loss
        logits = result.logits
        # Accumulate the training loss over all of the batches so that we can
        # calculate the average loss at the end. `loss` is a Tensor containing a
        # single value; the `.item()` function just returns the Python value
        # from the tensor.
        total_train_loss += loss.item()
        # Perform a backward pass to calculate the gradients.
        loss.backward()
        # Clip the norm of the gradients to 1.0.
        # This is to help prevent the "exploding gradients" problem.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        # Update parameters and take a step using the computed gradient.
        # The optimizer dictates the "update rule"--how the parameters are
        # modified based on their gradients, the learning rate, etc.
        optimizer.step()
        # Update the learning rate.
        scheduler.step()
    # Calculate the average loss over all of the batches.
    avg_train_loss = total_train_loss / len(train_dataloader)
    # Measure how long this epoch took.
    training_time = format_time(time.time() - t0)
    print("")
    print(" Average training loss: {0:.2f}".format(avg_train_loss))
    print(" Training epoch took: {:}".format(training_time))
    # ========================================
    #               Validation
    # ========================================
    # After the completion of each training epoch, measure our performance on
    # our validation set.
    print("")
    print("Running Validation...")
    t0 = time.time()
    # Put the model in evaluation mode--the dropout layers behave differently
    # during evaluation.
    model.eval()
    # Tracking variables
    total_eval_accuracy = 0
    total_eval_loss = 0
    nb_eval_steps = 0
    # Evaluate data for one epoch
    for batch in validation_dataloader:
        # Unpack this training batch from our dataloader.
        #
        # As we unpack the batch, we'll also copy each tensor to the GPU using
        # the `to` method.
        #
        # `batch` contains three pytorch tensors:
        #   [0]: input ids
        #   [1]: attention masks
        #   [2]: labels
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)
        # Tell pytorch not to bother with constructing the compute graph during
        # the forward pass, since this is only needed for backprop (training).
        with torch.no_grad():
            # Forward pass, calculate logit predictions.
            # token_type_ids is the same as the "segment ids", which
            # differentiates sentence 1 and 2 in 2-sentence tasks.
            result = model(b_input_ids,
                           token_type_ids=None,
                           attention_mask=b_input_mask,
                           labels=b_labels,
                           return_dict=True)
        # Get the loss and "logits" output by the model. The "logits" are the
        # output values prior to applying an activation function like the
        # softmax.
        loss = result.loss
        logits = result.logits
        # Accumulate the validation loss.
        total_eval_loss += loss.item()
        # Move logits and labels to CPU
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()
        # Calculate the accuracy for this batch of test sentences, and
        # accumulate it over all batches.
        total_eval_accuracy += flat_accuracy(logits, label_ids)
    # Report the final accuracy for this validation run.
    avg_val_accuracy = total_eval_accuracy / len(validation_dataloader)
    print(" Accuracy: {0:.2f}".format(avg_val_accuracy))
    # Calculate the average loss over all of the batches.
    avg_val_loss = total_eval_loss / len(validation_dataloader)
    # Measure how long the validation run took.
    validation_time = format_time(time.time() - t0)
    print(" Validation Loss: {0:.2f}".format(avg_val_loss))
    print(" Validation took: {:}".format(validation_time))
    # Record all statistics from this epoch.
    training_stats.append(
        {
            'epoch': epoch_i + 1,
            'Training Loss': avg_train_loss,
            'Valid. Loss': avg_val_loss,
            'Valid. Accur.': avg_val_accuracy,
            'Training Time': training_time,
            'Validation Time': validation_time
        }
    )
print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))
I'm not well-versed in this, but I guess this would help. In the code you posted, you haven't changed num_labels to 12; it is still 2. If you have 12 classes, you probably need to change that, right? Let me know if it works. Also, could you share the answer to the previously posted question about calculating average GloVe word embeddings? I also want to learn how to implement it.
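For completeness, here is a minimal sketch of the two changes that usually fix this error: set num_labels=12 (and tell the model it is a multilabel problem), and make the targets (batch, 12) float multi-hot vectors so they match the logits. The to_multi_hot helper below is hypothetical and assumes each example's labels come as a list of class indices; if every document has exactly one of the 12 labels, plain integer targets with num_labels=12 and the default cross-entropy loss are enough.

import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=12,                                 # 12 output logits
    problem_type="multi_label_classification",     # use BCE-with-logits internally
)

# Hypothetical helper: list of label-index lists -> (batch, 12) float tensor.
def to_multi_hot(label_lists, num_labels=12):
    targets = torch.zeros(len(label_lists), num_labels)
    for row, labels in enumerate(label_lists):
        targets[row, labels] = 1.0
    return targets

b_labels = to_multi_hot([[9], [3, 8], [11]])       # example batch of label sets, shape (3, 12)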

Custom tensorflow model unable to process inputs (should be string) properly DURING TRAINING

I'm building a custom model in TensorFlow with a custom layer (a fastText embedding layer) via subclassing, and here is the setup I have so far.
(P.S. the simple_preprocess function is simply imported from the gensim library: from gensim.utils import simple_preprocess)
class FastTextEmbedding(tf.keras.layers.Layer):
    def __init__(self, trained_ft_model_dir:str) -> None:
        super(FastTextEmbedding, self).__init__()
        self.trained_ft_model = FastText.load(trained_ft_model_dir)

    # INPUT IS A STRING : "Hey my name is Anas"
    def call(self, input):
        # print(type(input))
        # input = tf.constant(input)
        # assert type(input) == str
        out = np.zeros(shape=(1,60))
        sent_tokenized = simple_preprocess(input)
        # we'll use mean pooling
        for word in sent_tokenized:
            out[:,:] += self.trained_ft_model.wv[word]
        print(len(sent_tokenized))
        return tf.convert_to_tensor(out / len(sent_tokenized))
        # return tf.convert_to_tensor(out / len(sent_tokenized))

    # def call(self, input):
    #     # print(type(input))
    #     # input = tf.constant(input)
    #     # assert type(input) == str
    #     out = np.zeros(shape=(60,))
    #     sent_tokenized = simple_preprocess(input)
    #     # we'll use mean pooling
    #     for word in sent_tokenized:
    #         out += self.trained_ft_model.wv[word]
    #     print(len(sent_tokenized))
    #     return tf.expand_dims(tf.convert_to_tensor(out / len(sent_tokenized)), axis=1)
    #     # return tf.convert_to_tensor(out / len(sent_tokenized))


class FastTextModel(tf.keras.Model):
    def __init__(self, trained_ft_model_dir, num_classes:int=16) -> None:
        super(FastTextModel, self).__init__(name='FT')
        self.fasttext_embeddings = FastTextEmbedding(trained_ft_model_dir)
        self.relu = tf.keras.layers.Activation('relu')
        self.softmax = tf.keras.layers.Activation('softmax')
        self.dense1 = tf.keras.layers.Dense(units=60)
        self.do = tf.keras.layers.Dropout(rate=0.35)
        self.dense2 = tf.keras.layers.Dense(units=num_classes)
        self.bn = tf.keras.layers.BatchNormalization()

    def call(self, input):
        x = self.fasttext_embeddings(input, training=False)
        x = self.dense1(x, training=True)
        print(x.shape)
        x = self.bn(x, training=True)
        print(x.shape)
        x = self.relu(x)
        x = self.do(x, training=False)
        print(x.shape)
        x = self.dense2(x, training=True)
        print(x.shape)
        return self.softmax(x)
So I've made a custom layer, FastTextEmbedding, and built that layer into my model, FastTextModel.
My issue is that when I go to build the model via the following line:
_ = model.build("Hey my name is Anas")
it builds successfully with the current setup I've got. FYI, the model is intended to take in a string, get the fastText embeddings for the words, combine them via mean pooling, and pass that through some layers to make a decision. Now when I go to train my model I get this weird error:
output exceeds the size limit. Open the full output data in a text editor
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[150], line 1
----> 1 history = model.fit(X_train,
2 y_train,
3 batch_size=128,
4 validation_data=(X_val, y_val),
5 validation_batch_size=128,
6 epochs=15)
File ~/Desktop/Winterproj/emojify/lib/python3.9/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File /var/folders/07/1yqf9lq93hb3l2f96cqmmv9c0000gn/T/__autograph_generated_filerg_dksj0.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
Call arguments received by layer 'FT' (type FastTextModel):
• input=tf.Tensor(shape=(None,), dtype=string)
My training data is of this format:
Training data (X_train, a np array of string tweets):
['chick fil hicksville finally open'
'accidental twinning with my favorites'
'one of my favorite people in this world love you mrs andrews' ...
'of your fav artists takin on hoco' 'welcome to the studio' 'the boys']
shape of training data: (204073,)
Labels for the data (categorically encoded; there are 16 total classes):
[[0. 0. 0. ... 0. 0. 0.]
[1. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
shape of labels for training: (204073, 16)
I tried expanding the dimensions in my custom layer's call method; that got me past a few problems, but I'm still stuck, because I need the model to train on all the inputs in batches of 128 or so.
In the end I want to be able to take a single line of text such as "Hey my name is Anas" and apply my model to it to output one of the 16 classes (they are emoji labels).
This is my first time building a model like this; I've been stuck for some time now and would appreciate any suggestions or advice. Thanks!
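One way around the string-tensor issue, sketched here only as an illustration rather than a definitive fix, is to do the gensim lookups outside the TensorFlow graph: precompute the mean-pooled fastText vector for every tweet with plain NumPy, then train an ordinary Keras classifier on those vectors. The names trained_ft_model, X_train and y_train below are assumed to be the loaded gensim model and the arrays from the question.

import numpy as np
import tensorflow as tf
from gensim.utils import simple_preprocess

def embed_sentences(sentences, ft_model, dim=60):
    # Mean-pool the fastText vectors of each sentence into one (dim,) vector.
    out = np.zeros((len(sentences), dim), dtype=np.float32)
    for i, sent in enumerate(sentences):
        tokens = simple_preprocess(sent)
        if tokens:
            out[i] = np.mean([ft_model.wv[w] for w in tokens], axis=0)
    return out

X_train_emb = embed_sentences(X_train, trained_ft_model)   # shape (num_tweets, 60)

# The network now only ever sees float tensors, so model.fit works normally.
clf = tf.keras.Sequential([
    tf.keras.layers.Dense(60, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.35),
    tf.keras.layers.Dense(16, activation='softmax'),
])
clf.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
clf.fit(X_train_emb, y_train, batch_size=128, epochs=15)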

Bi-LSTM with Keras: dimensions must be equal but are 7 and 300

I am creating a Bi-LSTM with Keras for the first time, but I am having difficulties. So that you understand, here are the steps I have taken:
I created an embedding matrix with GloVe for my x;
def create_embeddings(fichier,dictionnaire,dictionnaire_tokens):
    with open(fichier) as file:
        line = file.readline()
        max_words = max(dictionnaire_tokens.values())+1 #1032
        max_size_dimensions = 300
        emb_matrix = np.zeros((max_words,max_size_dimensions))
        for item,count in dictionnaire_tokens.items():
            try:
                vecteur = dictionnaire[item]
            except:
                pass
            if vecteur is not None:
                emb_matrix[count] = vecteur
    return emb_matrix
I did some one-hot encoding of my y's;
def one_hot_encoding(file):
    with open(file) as file:
        line = file.readline()
        liste = []
        while line:
            tag = line.split(" ")[1]
            tag = [tag]
            line = file.readline()
            liste.append(tag)
        one_hot = MultiLabelBinarizer()
        array = one_hot.fit_transform(liste)
    return array
I compiled my model with keras
from tensorflow.keras.layers import Bidirectional
model = Sequential()
embedding_layer = Embedding(input_dim=1031 + 1,
output_dim=300,
weights=[embedding_matrix],
trainable=False)
model.add(embedding_layer)
bilstm_layer = Bidirectional(LSTM(units=300, return_sequences=True))
model.add(bilstm_layer)
model.add(Dense(300, activation="relu"))
#crf_layer = CRF(units=len(self.tags), sparse_target=True)
#model.add(crf_layer)
model.compile(optimizer="adam", loss='binary_crossentropy', metrics='acc')
model.summary()
Input of my embedding layer (embedding matrix) :
[[ 0. 0. 0. ... 0. 0. 0. ]
[ 0. 0. 0. ... 0. 0. 0. ]
[ 0. 0. 0. ... 0. 0. 0. ]
...
[-0.068577 -0.71314 0.3898 ... -0.077923 -1.0469 0.56874 ]
[ 0.32461 0.50463 0.72544 ... 0.17634 -0.28961 0.29007 ]
[-0.33771 -0.24912 -0.032685 ... -0.033254 -0.45513 -0.13319 ]]
I then train my model. However, when I try to train it, I get the following message: ValueError: Dimensions must be equal, but are 7 and 300 for '{{node binary_crossentropy/mul}} = Mul[T=DT_FLOAT](binary_crossentropy/Cast, binary_crossentropy/Log)' with input shapes: [?,7], [?,300,300].
My embedding matrix was made with GloVe 300d, so it has 300 dimensions, whereas I only have 7 labels. So I have to make my x and y dimensions compatible, but how? Thank you!!!
keras.backend.clear_session()
from tensorflow.keras.layers import Bidirectional
model = Sequential()
_input = keras.layers.Input(shape=(300,1))
model.add(_input)
bilstm_layer = Bidirectional(LSTM(units=300, return_sequences=False))
model.add(bilstm_layer)
model.add(Dense(7, activation="relu")) #here 7 is the number of classes you have and None is the batch_size
#crf_layer = CRF(units=len(self.tags), sparse_target=True)
#model.add(crf_layer)
model.compile(optimizer="adam", loss='binary_crossentropy', metrics='acc')
model.summary()
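For what it's worth, a small usage sketch of how the shapes line up with that model (the arrays below are dummy data, purely to illustrate the expected shapes): the network now takes inputs of shape (batch, 300, 1) and one-hot targets of shape (batch, 7), so the final Dense(7) matches the 7 label columns, which is what removes the "7 and 300" mismatch.

import numpy as np

x_dummy = np.random.rand(32, 300, 1).astype("float32")                   # 32 samples, 300 timesteps, 1 feature
y_dummy = np.eye(7)[np.random.randint(0, 7, size=32)].astype("float32")  # one-hot labels, shape (32, 7)

model.fit(x_dummy, y_dummy, batch_size=8, epochs=1)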

wandb pytorch: top1 accuracy per class

I have 5 classes in my validation set, and I want to draw a graph of the top-1 results per class in the validation loop using wandb. I have tried a single accuracy graph based on the average over the 5 classes and it works fine, but I want to do it separately, i.e. a top-1 accuracy for each class. I have been unable to achieve this; is there any way to do it?
Validation Loader
val_loaders = []
for nuisance in val_nuisances:
    val_loaders.append((nuisance, torch.utils.data.DataLoader(
        datasets.ImageFolder(os.path.join(valdir, nuisance), transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            normalize,
        ])),
        batch_size=args.batch_size, shuffle=False,
        num_workers=args.workers, pin_memory=True,
    )))
val_nuisances = ['shape', 'pose', 'texture', 'context', 'weather']
Validation Loop
def validate(val_loaders, model, criterion, args):
    overall_top1 = 0
    for nuisance, val_loader in val_loaders:
        batch_time = AverageMeter('Time', ':6.3f', Summary.NONE)
        losses = AverageMeter('Loss', ':.4e', Summary.NONE)
        top1 = AverageMeter('Acc#1', ':6.2f', Summary.AVERAGE)
        top5 = AverageMeter('Acc#5', ':6.2f', Summary.AVERAGE)
        progress = ProgressMeter(
            len(val_loader),
            [batch_time, losses, top1, top5],
            prefix=f'Test {nuisance}: ')
        # switch to evaluate mode
        model.eval()
        with torch.no_grad():
            end = time.time()
            for i, (images, target) in enumerate(val_loader):
                if args.gpu is not None:
                    images = images.cuda(args.gpu, non_blocking=True)
                if torch.cuda.is_available():
                    target = target.cuda(args.gpu, non_blocking=True)
                # compute output
                output = model(images)
                loss = criterion(output, target)
                # measure accuracy and record loss
                acc1, acc5 = accuracy(output, target, topk=(1, 5))
                losses.update(loss.item(), images.size(0))
                top1.update(acc1[0], images.size(0))
                top5.update(acc5[0], images.size(0))
                # measure elapsed time
                batch_time.update(time.time() - end)
                end = time.time()
                if i % args.print_freq == 0:
                    progress.display(i)
        progress.display_summary()
        overall_top1 += top1.avg
    overall_top1 /= len(val_loaders)
    return top1.avg
I don't see any logging to W&B in your code, but logging the top-1 accuracy per class would just be:
class_names = ['shape', 'pose', 'texture', 'context', 'weather']
top1_accuracies = [0.9, 0.8, 0.9, 0.9, 0.8]
wandb.log({class_names[0]: top1_accuracies[0], class_names[1]: top1_accuracies[1], ...})
In your code above, it looks like you're not actually creating a variable for the top-1 accuracy of each class; you'll want to do that first. The following is taken from https://stackoverflow.com/a/50977153/3959708.
You can use sklearn's confusion matrix to get the accuracy
from sklearn.metrics import confusion_matrix
import numpy as np
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
#Get the confusion matrix
cm = confusion_matrix(y_true, y_pred)
#array([[1, 0, 0],
# [1, 0, 0],
# [0, 1, 2]])
#Now the normalize the diagonal entries
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
#array([[1. , 0. , 0. ],
# [1. , 0. , 0. ],
# [0. , 0.33333333, 0.66666667]])
#The diagonal entries are the accuracies of each class
cm.diagonal()
#array([1. , 0. , 0.66666667])
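Putting the two pieces together, a rough sketch of what the per-class logging could look like (the y_true/y_pred arrays and the metric names are illustrative; in practice you would accumulate predictions over the whole validation loader):

import numpy as np
import wandb
from sklearn.metrics import confusion_matrix

class_names = ['shape', 'pose', 'texture', 'context', 'weather']

# Illustrative predictions; accumulate these over the validation loop in real code.
y_true = np.array([0, 1, 2, 2, 2, 3, 4, 4])
y_pred = np.array([0, 0, 2, 2, 1, 3, 4, 3])

cm = confusion_matrix(y_true, y_pred)
per_class_top1 = cm.diagonal() / cm.sum(axis=1)   # diagonal / row sums = per-class top-1 accuracy

# One metric per class, so W&B draws a separate curve for each.
wandb.log({f"top1/{name}": acc for name, acc in zip(class_names, per_class_top1)})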

Multiple propositions for multiple class prediction

I am working on a word prediction problem. I have examples of career paths, and I would like to predict a person's next job from their last 2 jobs. I have built an LSTM model to do this.
My problem is that, when trying to get multiple results from Keras' model.predict_classes function, it only returns 1 result. I would like to get multiple results, ordered by their probability.
Here is the code:
from numpy import array
from keras.preprocessing.text import Tokenizer
from keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.layers import Embedding
# generate a sequence from a language model
def generate_seq(model, tokenizer, max_length, seed_text, n_words):
    in_text = seed_text
    # generate a fixed number of words
    for _ in range(n_words):
        # encode the text as integer
        encoded = tokenizer.texts_to_sequences([in_text])[0]
        # pre-pad sequences to a fixed length
        encoded = pad_sequences([encoded], maxlen=max_length, padding='pre')
        # predict probabilities for each word
        yhat = model.predict_classes(encoded, verbose=1)
        print('yhat = ' + str(yhat))
        #print('yhat : ' + str(yhat))
        # map predicted word index to word
        out_word = ''
        for word, index in tokenizer.word_index.items():
            if index == yhat:
                out_word = word
                break
        # append to input
        in_text += ' ' + out_word
    return in_text
# source text
data = """apprenti electricien chefOdeOprojet \n
soudeur chefOdeOsection directeurOusine\n
mecanicien chefOdeOsection directeurOadjoint\n
ingenieur chefOdeOprojet directeurOadjoint directeurOusine\n
ingenieur chefOdeOprojet \n
apprenti soudeur chefOdeOsection chefOdeOprojet\n
ingenieurOetude chefOdeOprojet\n
ingenieurOetude manager chefOdeOprojet directeurOdepartement\n
apprenti gestionOproduction manager directeurOdepartement\n
ingenieurOetude commercial\n
soudeur ingenieurOetude manager directeurOadjoint\n
ingenieurOetude directeurOdepartement directeurOusine\n
apprenti soudeur\n
agentOsecurite chefOsecurite\n
apprenti mecanicien ouvrier manager\n
commercial directeurOadjoint\n
agentOsecurite chefOsecurite\n
directeurOusine retraite\n
ouvrier manager\n
ingenieur vente\n
secretaire comptable\n
comptable chefOcomptable\n
chefOcomptable directeurOdepartement\n
assistant secretaire comptable\n
assistant comptable\n
assistant secretaire commercial\n
commercial chefOdeOprojet\n
commercial vente chefOdeOprojet\n
electricien chefOdeOsection\n
apprenti ouvrier chefOdeOsection\n"""
# integer encode sequences of words
tokenizer = Tokenizer()
tokenizer.fit_on_texts([data])
encoded = tokenizer.texts_to_sequences([data])[0]
# retrieve vocabulary size
vocab_size = len(tokenizer.word_index) + 1
print('Vocabulary Size: %d' % vocab_size)
# encode 2 words -> 1 word
sequences = list()
for line in data.split('\n'):
    encoded = tokenizer.texts_to_sequences([line])[0]
    for i in range(2, len(encoded)):
        sequence = encoded[i-2:i+1]
        sequences.append(sequence)
print('Total Sequences: %d' % len(sequences))
# pad sequences
max_length = max([len(seq) for seq in sequences])
sequences = pad_sequences(sequences, maxlen=max_length, padding='pre')
print('Max Sequence Length: %d' % max_length)
# split into input and output elements
sequences = array(sequences)
X, y = sequences[:,:-1],sequences[:,-1]
y = to_categorical(y, num_classes=vocab_size)
# define model
model = Sequential()
model.add(Embedding(vocab_size, 10, input_length=max_length-1))
model.add(LSTM(50))
model.add(Dropout(0.2))
#model.add(Dense(units = 3, activation = 'relu'))
model.add(Dense(vocab_size, activation='softmax'))
print(model.summary())
# compile network
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
model.fit(X, y, epochs=500, verbose=0)
# evaluate model
print(generate_seq(model, tokenizer, max_length-1, 'electricien secretaire', 1))
and here is the console display:
Vocabulary Size: 24
Total Sequences: 20
Max Sequence Length: 3
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_2 (Embedding) (None, 2, 10) 240
_________________________________________________________________
lstm_2 (LSTM) (None, 50) 12200
_________________________________________________________________
dropout_2 (Dropout) (None, 50) 0
_________________________________________________________________
dense_2 (Dense) (None, 24) 1224
=================================================================
Total params: 13,664
Trainable params: 13,664
Non-trainable params: 0
_________________________________________________________________
None
1/1 [==============================] - 0s 86ms/step
yhat = [1]
electricien secretaire chefodeoprojet
If I understand your question correctly, you would like to see the probabilities associated with each class of a multi-class classification problem?
The code looks pretty correct to me, but I would recommend trying a different evaluation step. I have gotten multi-class outputs with the following snippet:
# Fit the model
print("Fitting model...")
model.fit(np.asarray(self.X), np.asarray(self.Y), epochs=200, batch_size=10)
print("Model fitting complete.")

self.TEST = np.asarray(self.TEST).reshape((test_data.shape[0], 1, 128))
print("Predicting on Test (unseen) data...")
predictions = model.predict(self.TEST)

# Sigmoid predictions
labels = np.zeros(predictions.shape)
labels[predictions > 0.5] = 1
print("Prediction labels for unseen: " + str(labels))
The output:
Prediction labels for unseen:
[[ 0. 1. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 0. 1. 0.]
[ 0. 0. 1. 0.]
[ 0. 0. 1. 0.]]
Each row denotes the classification of one sample; the index of the 1 represents which class (A,B,C,D) the sample fell into.
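To get several candidate next words ordered by probability, which is what the question asks for, one option (a sketch reusing the question's model, tokenizer and padded encoded sequence; the top_k value is arbitrary) is to call model.predict and sort the resulting softmax vector:

import numpy as np

probs = model.predict(encoded, verbose=0)[0]       # softmax over the vocabulary, shape (vocab_size,)
top_k = 3
best_indices = np.argsort(probs)[::-1][:top_k]     # word indices sorted by descending probability

index_to_word = {index: word for word, index in tokenizer.word_index.items()}
for idx in best_indices:
    print(index_to_word.get(idx, '?'), probs[idx])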
