I want to use the facebook/bart-large-mnli model for an NLI task.
I have a dataset with premise and hypothesis columns and labels [0, 1, 2].
How can I use this model for that NLI task?
I wrote the following code:
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli')
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')
nli_model.to(device)
i = 0  # first example check
premise = tokenized_datasets['TRAIN'][i]['premise']
hypothesis = tokenized_datasets['TRAIN'][i]['hypothesis']
x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation='only_first')
logits = nli_model(x.to(device))[0]
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
probs
and I got only 2 values: tensor([[8.8793e-05, 9.9991e-01]], device='cuda:0', grad_fn=<SoftmaxBackward0>) instead of 3 values (contradiction, neutral, entailment).
How can I use this model for NLI (predicting the right one of the 3 labels)?
This code line:
entail_contradiction_logits = logits[:,[0,2]]
selects only the first and third elements of the logits tensor, i.e. it removes the logits for neutral. Just use the variable logits as it is to get probabilities for all 3 labels.
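For example, a minimal sketch reusing the variables from your snippet (for facebook/bart-large-mnli the label order is 0 = contradiction, 1 = neutral, 2 = entailment; check that your dataset uses the same mapping):
probs = logits.softmax(dim=1)  # shape (1, 3): [contradiction, neutral, entailment]
pred = probs.argmax(dim=1).item()  # predicted label index in {0, 1, 2}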
I have successfully built a sentiment analysis tool with BertForSequenceClassification from huggingface/transformers to classify $tsla tweets as positive or negative.
However, I can't figure out how to obtain the feature vectors per tweet (more specifically, the embedding of [CLS]) from my finetuned model.
More info on the model used:
model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, num_labels=num_labels)
model.config.output_hidden_states = True
tokenizer = BertTokenizer(OUTPUT_DIR+'vocab.txt')
However, when I run the code below, the output variable only consists of the logits.
model.eval()
eval_loss = 0
nb_eval_steps = 0
preds = []

for input_ids, input_mask, segment_ids, label_ids in tqdm_notebook(eval_dataloader, desc="Evaluating"):
    input_ids = input_ids.to(device)
    input_mask = input_mask.to(device)
    segment_ids = segment_ids.to(device)
    label_ids = label_ids.to(device)

    with torch.no_grad():
        output = model(input_ids, token_type_ids=segment_ids, attention_mask=input_mask)
I also have this problem after fine-tuning BertForSequenceClassification. I take it your goal is to get the hidden state of [CLS] as the representation of each tweet, right? Per the API documentation, I think the code is:
model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, output_hidden_states=True)
logits, hidden_states = model(input_ids, attn_masks)
cls_hidden_state = hidden_states[-1][:, 0, :]  # the [CLS] (first token) hidden state from the last layer
or
model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, output_hidden_states=True)
last_hidden_states = model.bert(input_ids, attn_masks)[0]
cls_hidden_state = last_hidden_states[:, 0, :]
BertForSequenceClassification is a wrapper that consists of two parts: the BERT model (attribute bert) and a classifier (attribute classifier).
You can call the underlying BERT model directly. If you pass your input straight to it, you will get the hidden states. It returns a tuple: the first member is the last-layer hidden states of all tokens, and the second one is the pooled [CLS] vector.
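For instance, a minimal sketch (assuming the older tuple-returning transformers API and the names from your eval loop) that collects one [CLS] feature vector per tweet:
cls_vectors = []
model.eval()
for input_ids, input_mask, segment_ids, label_ids in eval_dataloader:
    with torch.no_grad():
        # model.bert returns (last_hidden_states, pooled_output, ...)
        last_hidden_states = model.bert(input_ids.to(device),
                                        token_type_ids=segment_ids.to(device),
                                        attention_mask=input_mask.to(device))[0]
    cls_vectors.append(last_hidden_states[:, 0, :].cpu())
cls_vectors = torch.cat(cls_vectors)  # shape: (num_tweets, hidden_size)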
I have been trying to deepen my understanding of word embeddings and NLP in Keras by implementing and adapting code that builds a Keras model with the functional API. When I call model.summary() I receive an AttributeError: 'NoneType' object has no attribute 'summary'.
After many attempts at decreasing the number of layers and the dimension of the word embedding matrix, unfortunately nothing changed. I don't know what to do.
def pretrained_embedding_layer(word_to_vec, word_to_index):
    vocab_len = len(word_to_index) + 1
    emb_dim = word_to_vec["sole"].shape[0]
    emb_matrix = np.zeros((vocab_len, emb_dim))
    for word, index in word_to_index.items():
        emb_matrix[index, :] = word_to_vec[word]
    print(emb_matrix.shape)
    embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)
    embedding_layer.build((None,))
    embedding_layer.set_weights([emb_matrix])
    return embedding_layer

def Chatbot_V1(input_shape, word_to_vec, word_to_index):
    # Define sentence_indices as the input of the graph; it should be of shape input_shape and dtype 'int32' (as it contains indices).
    sentence_indices = Input(input_shape, dtype='int32')
    # Create the embedding layer pretrained with GloVe vectors
    embedding_layer = pretrained_embedding_layer(word_to_vec, word_to_index)
    embeddings = embedding_layer(sentence_indices)
    # Propagate the embeddings through an LSTM layer with a 128-dimensional hidden state
    X = LSTM(128, return_sequences=True)(embeddings)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through another LSTM layer with a 128-dimensional hidden state
    # Be careful, the returned output should be a single hidden state, not a batch of sequences.
    X = LSTM(128, return_sequences=True)(X)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through a Dense layer with softmax activation to get back a batch of vocab_dim-dimensional vectors.
    X = Dense(vocab_dim)(X)
    # Add a softmax activation
    preds = Activation('softmax')(X)
    # Create a Model instance which converts sentence_indices into preds.
    model = Model(sentence_indices, preds)

model = Chatbot_V1((maxLen,), word_to_vec, word_to_index)
model.summary()
Launching model.summary() gives:
AttributeError: 'NoneType' object has no attribute 'summary'
Why? What is wrong in the layer definitions?
The function Chatbot_V1 does not return anything; in Python, a function without a return statement implicitly returns None, which is what ends up assigned to model. So just use the return keyword to return the model at the end of Chatbot_V1.
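A minimal, self-contained sketch of what is going on (the string stands in for your Model instance):
def build_without_return():
    model = "a model"  # stands in for Model(sentence_indices, preds)

def build_with_return():
    model = "a model"
    return model  # the fix: hand the model back to the caller

print(build_without_return())  # None -> calling .summary() on this fails
print(build_with_return())     # 'a model'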
I am trying to reproduce the nice work here and adapt it so that it reads real data from a file.
I started by generating random signals (instead of using the generating methods provided in the above link). Unfortunately, I could not generate signals in the proper shape that the model can accept.
Here is the code:
import numpy as np
import random

import keras
from keras.utils import plot_model

input_sequence_length = 15  # Length of the sequence used by the encoder
target_sequence_length = 15  # Length of the sequence predicted by the decoder

def getModel():
    # Define an input sequence.
    learning_rate = 0.01
    num_input_features = 1
    lambda_regulariser = 0.000001  # Will not be used if regulariser is None
    regulariser = None  # Possible regulariser: keras.regularizers.l2(lambda_regulariser)
    layers = [35, 35]
    num_output_features = 1
    decay = 0  # Learning rate decay
    loss = "mse"  # Other loss functions are possible, see the Keras documentation.
    optimiser = keras.optimizers.Adam(lr=learning_rate, decay=decay)  # Another possible optimiser: "sgd" (stochastic gradient descent)

    encoder_inputs = keras.layers.Input(shape=(None, num_input_features))

    # Create a list of RNN cells; these are then stacked into a single layer
    # with the RNN layer.
    encoder_cells = []
    for hidden_neurons in layers:
        encoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                                  kernel_regularizer=regulariser,
                                                  recurrent_regularizer=regulariser,
                                                  bias_regularizer=regulariser))
    encoder = keras.layers.RNN(encoder_cells, return_state=True)
    encoder_outputs_and_states = encoder(encoder_inputs)

    # Discard the encoder outputs and only keep the states.
    # The outputs are of no interest to us; the encoder's
    # job is to create a state describing the input sequence.
    encoder_states = encoder_outputs_and_states[1:]

    # The decoder input will be set to zero (see the random_sine function of the utils module).
    # Do not worry about the input size being 1, I will explain that in the next cell.
    decoder_inputs = keras.layers.Input(shape=(None, 1))
    decoder_cells = []
    for hidden_neurons in layers:
        decoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                                  kernel_regularizer=regulariser,
                                                  recurrent_regularizer=regulariser,
                                                  bias_regularizer=regulariser))
    decoder = keras.layers.RNN(decoder_cells, return_sequences=True, return_state=True)

    # Set the initial state of the decoder to be the output state of the encoder.
    # This is the fundamental part of the encoder-decoder.
    decoder_outputs_and_states = decoder(decoder_inputs, initial_state=encoder_states)

    # Only select the output of the decoder (not the states).
    decoder_outputs = decoder_outputs_and_states[0]

    # Apply a dense layer with linear activation to set the output to the correct dimension
    # and scale (tanh is the default activation for GRU in Keras; our output sine function can be larger than 1).
    decoder_dense = keras.layers.Dense(num_output_features,
                                       activation='linear',
                                       kernel_regularizer=regulariser,
                                       bias_regularizer=regulariser)
    decoder_outputs = decoder_dense(decoder_outputs)

    # Create a model using the functional API provided by Keras.
    # The functional API is great: it gives an amazing amount of freedom in the architecture of your NN.
    # A read worth your time: https://keras.io/getting-started/functional-api-guide/
    model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs)
    model.compile(optimizer=optimiser, loss=loss)
    print(model.summary())
    return model

def getXY():
    X, y = list(), list()
    for _ in range(100):
        x = [random.random() for _ in range(input_sequence_length)]
        y = [random.random() for _ in range(target_sequence_length)]
        X.append([x, [0 for _ in range(input_sequence_length)]])
        y.append(y)
    return np.array(X), np.array(y)

X, y = getXY()
print(X, y)

model = getModel()
model.fit(X, y)
The error message I got is:
ValueError: Error when checking model input: the list of Numpy arrays
that you are passing to your model is not the size the model expected.
Expected to see 2 array(s), but instead got the following list of 1
arrays:
What is the correct shape of the input data for the model?
If you read the source of your inspiration carefully, you will find that he talks about the "decoder_input" data.
He talks about the "teacher forcing" technique, which consists of feeding the decoder some delayed data. But he also says that it didn't really work well in his case, so he sets the decoder input to a bunch of 0's, as this line shows:
decoder_input = np.zeros((decoder_output.shape[0], decoder_output.shape[1], 1))
In his design of the encoder-decoder, there are two separate models that have different inputs, which he then ties together via the RNN states.
I can see that you have tried doing the same thing, but you have appended np.array([x_encoder, x_decoder]) where you should have done [np.array(x_encoder), np.array(x_decoder)]. Each input to the network should be a numpy array that you put in a list of inputs, not one big numpy array.
I also found some bugs in your code: you are appending y to itself, where you should instead create a separate Y variable:
def getXY():
    X_encoder, X_decoder, Y = list(), list(), list()
    for _ in range(100):
        x_encoder = [random.random() for _ in range(input_sequence_length)]
        # The decoder input is a sequence of 0's with the same length as the target sequence.
        x_decoder = [0] * target_sequence_length
        y = [random.random() for _ in range(target_sequence_length)]
        X_encoder.append(x_encoder)
        # Not really optimal, but it will work.
        X_decoder.append(x_decoder)
        Y.append(y)
    return [np.array(X_encoder), np.array(X_decoder)], np.array(Y)
Now when you do:
X, Y = getXY()
you receive X, which is a list of 2 numpy arrays (as your model requests), and Y, which is a single numpy array.
I hope this helps
EDIT
Indeed, in the code that generates the dataset, you can see that they build 3-dimensional numpy arrays for the input. RNNs need 3-dimensional inputs (samples, timesteps, features) :-)
The following code should address the shape issue:
def getXY():
    X_encoder, X_decoder, Y = list(), list(), list()
    for _ in range(100):
        x_encoder = [random.random() for _ in range(input_sequence_length)]
        # The decoder input is a sequence of 0's with the same length as the target sequence.
        x_decoder = [0] * target_sequence_length
        y = [random.random() for _ in range(target_sequence_length)]
        X_encoder.append(x_encoder)
        # Not really optimal, but it will work.
        X_decoder.append(x_decoder)
        Y.append(y)
    # Turn the lists into numpy arrays.
    X_encoder = np.array(X_encoder)
    X_decoder = np.array(X_decoder)
    Y = np.array(Y)
    # Make them 3-dimensional arrays (with the third dimension being of size 1),
    # e.g. the 1-D vector [1, 2] becomes the 2-D vector [[1, 2]].
    X_encoder = np.expand_dims(X_encoder, axis=2)
    X_decoder = np.expand_dims(X_decoder, axis=2)
    Y = np.expand_dims(Y, axis=2)
    return [X_encoder, X_decoder], Y
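A quick usage sketch (with input_sequence_length = target_sequence_length = 15, as defined above) to confirm that the shapes now match what the two-input model expects:
X, Y = getXY()
print(X[0].shape, X[1].shape, Y.shape)  # (100, 15, 1) (100, 15, 1) (100, 15, 1)
model = getModel()
model.fit(X, Y)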
I'm using LSTM with Keras to forecast a set of sequences. Here is my basic model:
inputs = Input(shape=(1, seq_dim))  # seq_dim = 2
# shape = (timesteps, featdim) = (1, 2) since my input sequences are pairs of values
# I want to predict the sequence of the first values in the pairs
se = LSTM(lstm_size)(inputs)
out = Dense(1)(se)  # I want to forecast one value
model = Model(inputs=inputs, outputs=out)
I know for sure that the sequences start from 0 and are monotonic (non-decreasing), so the forecast should never be smaller than the current value of the sequence.
I tried with the Maximum() layer:
max_out = Maximum()([output_seq, input_seq])
Here is the model:
inputs = Input(shape=(1, seq_dim))
# shape = (timesteps, featdim) = (1, 2) since my input sequences are pairs of values
# I want to predict the sequence of the first values in the pairs
se = LSTM(lstm_size)(inputs)
out = Dense(1)(se)  # I want to forecast one value
# max between the output and the previous value of the sequence (current input)
max_out = Maximum()([out, inputs[:, :, 0]])
model = Model(inputs=inputs, outputs=max_out)
However, when compiling the model, an error is raised:
"AttributeError: 'Tensor' object has no attribute '_keras_history'"
I've also tried with a Lambda layer, but it raises the same error:
max_out = Lambda(lambda x: K_BACKEND.max(x))([out, inputs[:, :, 0]])
How can I add this constraint to my model? Is it possible in the architecture definition (as I'm trying to do), or by editing the loss function?
Thanks in advance
The error comes from slicing inputs outside of a layer: the sliced tensor has no _keras_history, so Keras cannot trace it back through the graph. Doing the slice inside a Lambda layer avoids this. Try this:
max_out = Lambda(lambda oi: K_BACKEND.maximum(oi[0], oi[1][:, :, 0]),
                 output_shape=lambda shapes: shapes[0])([out, inputs])
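For reference, a minimal end-to-end sketch of the constrained model (assuming placeholder values seq_dim = 2 and lstm_size = 16, and the pre-2.0 Keras API used in the question):
from keras.layers import Input, LSTM, Dense, Lambda
from keras.models import Model
import keras.backend as K_BACKEND

seq_dim, lstm_size = 2, 16
inputs = Input(shape=(1, seq_dim))
se = LSTM(lstm_size)(inputs)
out = Dense(1)(se)
# Element-wise max of the forecast and the current first value, computed
# inside a Lambda layer so the slice stays part of the Keras graph.
max_out = Lambda(lambda oi: K_BACKEND.maximum(oi[0], oi[1][:, :, 0]),
                 output_shape=lambda shapes: shapes[0])([out, inputs])
model = Model(inputs=inputs, outputs=max_out)
model.compile(optimizer='adam', loss='mse')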
I am receiving different ROC-AUC scores from sklearn's GridSearchCV (fitting a RandomForestClassifier) and from the roc_curve and auc functions, respectively.
The following code got me an ROC-AUC (i.e. gs.best_score_) of 0.878:
def train_model(mod=None, params=None, features=None,
                outcome=...outcomes array..., metric='roc_auc'):
    gs = GridSearchCV(mod, params, scoring=metric, loss_func=None, score_func=None,
                      fit_params=None, n_jobs=-1, iid=True, refit=True, cv=10, verbose=0,
                      pre_dispatch='2*n_jobs', error_score='raise')
    gs.fit(...feature set df..., outcome)
    print(gs.best_score_)
    print(gs.best_params_)
    return gs
model = RandomForestClassifier(random_state=2000, n_jobs=-1)
features_to_include = [...list of column names...]
parameters = {
    'n_estimators': [...list...], 'max_depth': [...list...],
    'min_samples_split': [...list...], 'min_samples_leaf': [...list...]
}
gs = train_model(mod = model, params = parameters, features = features_to_include)
Whereas the following code got me an ROC-AUC of 0.97:
fpr = dict()
tpr = dict()
roc_auc = dict()
fpr['micro'], tpr['micro'], _ = roc_curve(...outcomes array...,
                                          gs.predict_proba(...feature set df...)[:, 1])
roc_auc['micro'] = auc(fpr['micro'], tpr['micro'])
Why is there such a difference? Did I do something wrong with my code?
Thanks!
Chris
They return different values for two reasons:
1. GridSearchCV splits your data into 10 folds (you are doing 10-fold cross-validation in your code), trains on 9 of them, and evaluates the AUC on the held-out fold. The best_score_ you get is the mean AUC over the held-out folds for the best parameter combination (more info here). Your roc_curve calculation, by contrast, computes the AUC on the entire set, which the refitted model has already seen during training, so that score is optimistically biased.
2. The default cross-validation 'roc_auc' scorer is the macro-averaged version (see here), while your later computation computes the micro-averaged version.
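For a fairer comparison, a minimal sketch (reusing your gs object, with hypothetical X and y standing in for your feature set df and outcomes array): score on a held-out split the grid search never saw.
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hold out a test set before the grid search.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
gs.fit(X_train, y_train)  # grid search + refit on the training split only
print(roc_auc_score(y_test, gs.predict_proba(X_test)[:, 1]))  # comparable to gs.best_score_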