NLP, Pre-trained models, BERT

I have a problem training a transformer model. I am building a new BERT model and training it from scratch on a 105 GiB Arabic text corpus, using the same hyperparameters as the original BERT. The loss often explodes and keeps growing larger, as shown in the picture below. Is there any interpretation of, and solution for, this problem?
Picture: [exploding loss](https://i.stack.imgur.com/Q8qjU.jpg)

Related

How to convert the output of a pretrained Huggingface transformer model from classification to regression for fine-tuning on my data?

I am using a transformer model that was extended from huggingface (DNABERT). This is a pretrained classification model whose output I would like to convert to regression, then fine-tune that model on my own data. I imagine this process would be roughly the same for any BERT-based huggingface classification model. How would I go about doing this?
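One hedged way to do this, assuming the checkpoint loads with `AutoModelForSequenceClassification` (the generic "bert-base-uncased" below is a stand-in for the actual DNABERT checkpoint name): setting `num_labels=1` together with `problem_type="regression"` gives a single-output head trained with MSE loss. This is a sketch under those assumptions, not the DNABERT authors' documented procedure.

```python
# Sketch: swap the classification head for a regression head, then fine-tune.
# Assumptions: a recent transformers version; "bert-base-uncased" stands in
# for the real DNABERT checkpoint name.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # replace with the DNABERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=1,                  # a single continuous output
    problem_type="regression",     # use MSELoss instead of CrossEntropyLoss
    ignore_mismatched_sizes=True,  # discard the old classification head's weights
)

batch = tokenizer(["an example input sequence"], return_tensors="pt")
targets = torch.tensor([[0.73]])            # float targets, shape (batch_size, 1)
loss = model(**batch, labels=targets).loss  # MSE loss, ready for loss.backward()
```

From here, fine-tuning proceeds as usual (e.g. with the Trainer API or a plain training loop), since the regression head is just a narrower linear layer on top of the same encoder.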

Backpropagation in BERT

I would like to know: when people say "pretrained BERT model", is it only the final classification neural network that is trained, or is the transformer itself also updated through backpropagation along with the classification network?
During pre-training, the entire model is trained (all weights are updated). Moreover, BERT is pre-trained with a masked language model objective, not a classification objective.
In pre-training, you usually train a model on a huge amount of generic data, so it then has to be fine-tuned with task-specific data and a task-specific objective.
So, if your task is classification on a dataset X, you fine-tune BERT accordingly by adding a task-specific layer (a classification layer; in BERT, a dense layer over the [CLS] token). While fine-tuning, you update the pre-trained model weights as well as the new task-specific layer, as in the sketch below.
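A minimal sketch of that point, using Huggingface transformers with a hypothetical two-label task: after the backward pass, both the new classification head and the pre-trained encoder layers have gradients, so both get updated unless you freeze the encoder explicitly.

```python
# Sketch: fine-tuning updates the pre-trained weights AND the new head.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["an example sentence"], return_tensors="pt")
loss = model(**batch, labels=torch.tensor([1])).loss
loss.backward()

# Both the task-specific head and the pre-trained encoder received gradients:
print(model.classifier.weight.grad is not None)                                   # True
print(model.bert.encoder.layer[0].attention.self.query.weight.grad is not None)   # True

# To train only the head instead (feature extraction), freeze the encoder:
# for p in model.bert.parameters():
#     p.requires_grad = False
```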

Would training a BERT Multi-Label Classifier for 100 labels decrease accuracy a lot?

I am trying to train a text classifier which would be able to classify a sentence as being of a certain query type. I have used the BERT Model and trained a Multi-Label classifier which does the job with 90% accuracy for about 20 labels.
My question is: if I have to train the model for 100/200 labels, would the accuracy be impacted severely?
If your class distributions do not overlap heavily and you have a good amount of training data representing each class, your accuracy should not be severely impacted. For a data-hungry model like BERT, it's all about data. If you have a large amount of data representing each of your 100/200 classes, you are good to go.
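For what it's worth, going from 20 to 100 labels only changes the width of the final linear layer; the per-label sigmoid plus binary cross-entropy setup stays the same. A rough sketch with Huggingface transformers (the checkpoint and label count are placeholders):

```python
# Sketch: the same multi-label head, just wider (100 outputs instead of 20).
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=100,                             # 100/200 labels: only this changes
    problem_type="multi_label_classification",  # sigmoid + BCEWithLogitsLoss per label
)
# Labels must now be float multi-hot vectors of shape (batch_size, 100).
```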

Anomaly detection in Text Classification

I have built a text classifier using OneClassSVM.
I have a training set which corresponds to only one label, i.e. "Yes", and I don't have any data for the other ("No") label. My task is to build a classifier which labels a new, unseen sentence (test data) as 1 if it is very similar to the training data, and otherwise as -1, i.e. an anomaly.
I have used Word2Vec to build word embeddings for my training data. Then I am using word-vector averaging with a OneClassSVM to build an anomaly detection classifier.
This classifier currently gives an accuracy of about 50-55%. I need to improve it further to build a robust classifier.
Any suggestions for this problem would be helpful.
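For concreteness, the pipeline described above might look roughly like this (gensim + scikit-learn; the example sentences, vector size, and `nu` value are placeholders):

```python
# Sketch of the described setup: average word2vec vectors per sentence,
# then fit a OneClassSVM on the single ("Yes") class.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import OneClassSVM

train_sentences = [
    ["this", "is", "a", "yes", "example"],
    ["another", "yes", "sentence"],
]  # stand-in for the tokenized "Yes" training data

w2v = Word2Vec(train_sentences, vector_size=100, min_count=1)

def sentence_vector(tokens, model):
    # Average the word vectors of the tokens that are in the vocabulary.
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X_train = np.stack([sentence_vector(s, w2v) for s in train_sentences])
clf = OneClassSVM(kernel="rbf", nu=0.1).fit(X_train)

# clf.predict(...) returns +1 for "similar to training data", -1 for anomaly.
```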
I'd suggest a very different approach since you have no training examples for the negative class at all.
You could train a language model on your training data. At inference time, you score the input with the language model and classify it by thresholding the perplexity the LM assigns to the input sentence.
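A hedged sketch of that perplexity idea: the answer suggests training an LM on the in-domain "Yes" data, but for brevity this uses an off-the-shelf GPT-2 from Huggingface as a stand-in (you would fine-tune it, or any other LM, on your training sentences first), and the threshold is a placeholder to be tuned on held-out data.

```python
# Sketch: classify by thresholding the language model's perplexity.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")  # ideally fine-tuned on the "Yes" data
lm.eval()

def perplexity(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = lm(**enc, labels=enc["input_ids"])  # shifted LM loss over the sentence
    return torch.exp(out.loss).item()

THRESHOLD = 100.0  # placeholder; tune on held-out "Yes" sentences

def classify(sentence: str) -> int:
    # +1 = similar to the training data, -1 = anomaly (same convention as OneClassSVM)
    return 1 if perplexity(sentence) <= THRESHOLD else -1
```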

How do the input word2vec embeddings get fine-tuned when training the CNN?

When I read the paper "Convolutional Neural Networks for Sentence Classification" by Yoon Kim (New York University), I noticed that it implements the "CNN-non-static" model: a model initialized with pre-trained vectors from word2vec, where all words (including unknown ones, which are randomly initialized) and the pre-trained vectors are fine-tuned for each task.
I just do not understand how the pre-trained vectors are fine-tuned for each task. As far as I know, the input vectors, which are converted from strings using the pre-trained word2vec.bin, are just like an image matrix, which cannot change while training the CNN. So, if they can change, how? Please help me out, thanks a lot in advance!
The word embeddings are weights of the neural network, and can therefore be updated during backpropagation.
E.g. http://sebastianruder.com/word-embeddings-1/ :
Naturally, every feed-forward neural network that takes words from a vocabulary as input and embeds them as vectors into a lower dimensional space, which it then fine-tunes through back-propagation, necessarily yields word embeddings as the weights of the first layer, which is usually referred to as Embedding Layer.
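A minimal sketch of that point in PyTorch (the vocabulary size, dimensionality, and random matrix stand in for vectors actually loaded from word2vec.bin): the embedding matrix is an ordinary weight tensor, so with `freeze=False` (the "non-static" setting) backpropagation updates the pre-trained vectors, while `freeze=True` reproduces the "static" variant.

```python
# Sketch: pre-trained word vectors as a trainable embedding layer.
import torch
import torch.nn as nn

pretrained = torch.randn(10_000, 300)  # stand-in for vectors read from word2vec.bin
embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)  # "non-static"

token_ids = torch.tensor([[5, 42, 7]])
vectors = embedding(token_ids)     # (1, 3, 300) tensor that feeds into the CNN
loss = vectors.pow(2).mean()       # dummy loss just to show gradient flow
loss.backward()

print(embedding.weight.grad is not None)  # True: the pre-trained vectors get updated
```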
