Determine Transfer Learning Strategy for NER task - nlp

I worked on a transfer learning project in which I created a labeled training dataset, took a pre-trained BERT model, and fine-tuned it. It was an NLP project in which I performed custom named entity recognition.
I'm now documenting the work, so I have to specify which transfer learning strategy I used.
I found this blog post: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a
but even after reading it I'm still confused about the strategy, and I want to make sure of my choice.

Do not try to come up with an abstract entity such as "strategy" - just describe what you have done.

Related

Building on existing models in spaCy

This is a question regarding training models in spaCy 3.x.
I couldn't find a good answer/solution on Stack Overflow, hence the query.
Suppose I am using an existing spaCy model, like the en model, and want to add my own entities and train it. Since I work in the biomedical domain, these would be things like virus name, shape, length, temperature, temperature value, etc. I don't want to lose the entities spaCy already tags, like organization names, countries, etc.
All suggestions are appreciated.
Thanks
There are a few ways to do that.
The best way is to train your own model separately and then combine both models in one pipeline, with one before the other. See the double NER example project for an overview of that.
It's also possible to update the pretrained NER model; see this example project. However, this isn't usually a good idea, and definitely not if you're adding completely different entities. You'll run into what's called "catastrophic forgetting": even though you're technically just updating the model, it ends up forgetting everything not represented in your current training data.
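For the pipeline-combination approach, here is a minimal sketch in spaCy 3.x. It assumes en_core_web_sm is installed, that "./my_biomedical_model" is a placeholder path to your separately trained pipeline, and that both pipelines were trained with compatible vocab/vectors; the double NER example project covers resolving conflicts between the two components.

```python
import spacy

# Stock pretrained pipeline (keeps ORG, GPE, etc.)
nlp = spacy.load("en_core_web_sm")

# Your separately trained biomedical pipeline (hypothetical path)
custom_nlp = spacy.load("./my_biomedical_model")

# Copy the "ner" component out of the custom pipeline and run it before the
# stock NER, so both sets of entities end up in doc.ents
nlp.add_pipe("ner", source=custom_nlp, name="biomedical_ner", before="ner")

doc = nlp("Samples from the CDC contained an influenza virus of unusual length.")
print([(ent.text, ent.label_) for ent in doc.ents])
```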

How to create a custom BERT language model for a different language?

I want to create a language translation model using transformers. However, TensorFlow Hub seems to only have a BERT model for English: https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4. If I want a BERT for another language, what is the best way to go about this? Should I create a new BERT, or can I train TensorFlow's own BertTokenizer on another language?
The Hugging Face model hub contains a plethora of pre-trained monolingual and multilingual transformers (and relevant tokenizers) which can be fine-tuned for your downstream task.
However, if you are unable to locate a suitable model for your language, then yes, training from scratch is the only option. Be warned, though, that training from scratch is a resource-intensive task that requires significant compute power. Here is an excellent blog post to get you started.
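As a minimal sketch of the hub route, here is how a monolingual checkpoint and its tokenizer can be pulled down with the transformers library; "bert-base-german-cased" is just one example of a publicly available model, and the freshly initialized classification head would still need fine-tuning on your task:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# One of many monolingual checkpoints on the Hugging Face hub; swap in
# whichever model matches your language and task
model_name = "bert-base-german-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The classification head is newly initialized and gets trained during fine-tuning
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("Das ist ein Beispielsatz.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```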

BERT fine tuning

I'm trying to create my model for question answering based on BERT and can't understand what fine-tuning means. Do I understand it right that it is like adaptation to a specific domain? And if I want to use it with a Wikipedia corpus, do I just need to integrate the unchanged pre-trained model into my network?
Fine-tuning is adapting (refining) the pre-trained BERT model to two things:
Domain
Task (e.g. classification, entity extraction, etc.).
You can use pre-trained models as-is at first, and if the performance is sufficient, fine-tuning for your use case may not be needed.
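As an illustration of the "as-is first" approach, the sketch below loads an already fine-tuned QA checkpoint through the transformers pipeline API; "deepset/roberta-base-squad2" is one publicly available example, not the only choice:

```python
from transformers import pipeline

# Try a ready-made QA model before fine-tuning your own
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

result = qa(
    question="What does fine-tuning adapt?",
    context="Fine-tuning adapts a pre-trained model to a specific domain and task.",
)
print(result["answer"], result["score"])
```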
Fine-tuning is more like adapting the pre-trained model to the downstream task. However, some recent state-of-the-art results suggest that fine-tuning doesn't help much with QA tasks. See also the following post.

How do we add a new face to a trained face recognition model (Inception/ResNet/VGG) without retraining the complete model?

Is it possible to add new face features to a trained face recognition model without retraining it on the previous faces?
I am currently using the FaceNet architecture.
Take a look at Siamese neural networks.
If you use such an approach, you don't need to retrain the model.
Basically, you train a model to generate an embedding (a vector) that maps similar images close together and different ones far apart.
Once this model is trained, a newly added face will be far from the others but close to the samples of the same person.
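Here is a minimal sketch of that enrollment idea. It assumes a hypothetical embed(image) function wrapping a pretrained FaceNet-style network that returns an L2-normalized vector; adding a person just means storing an embedding, and the network itself is never retrained:

```python
import numpy as np

gallery = {}  # name -> reference embedding

def enroll(name, image, embed):
    # Store one embedding per person; no model weights change
    gallery[name] = embed(image)

def identify(image, embed, threshold=0.7):
    # Cosine similarity reduces to a dot product for L2-normalized vectors;
    # the 0.7 threshold is arbitrary and should be tuned on validation data
    query = embed(image)
    best_name, best_sim = None, -1.0
    for name, ref in gallery.items():
        sim = float(np.dot(query, ref))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else "unknown"
```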
In principle, by the mathematical theory behind machine learning models, you would need to run another training iteration with only the new data.
In practice, though, these models, especially the sophisticated ones, rely on many training iterations and on various techniques such as shuffling and noise reduction.
A good approach can be to continue training the model from its previous state on a subset of the data that includes the new data, for a couple of iterations.
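A minimal sketch of that continued-training idea is below. Everything here is hypothetical: it assumes you already have a trained classification-style model and labeled old_dataset/new_dataset objects, and it mixes a small random rehearsal subset of the old data with the new data to limit forgetting:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Subset

def rehearsal_update(model, old_dataset, new_dataset, epochs=2, rehearsal_size=500):
    # Mix the new samples with a small random subset of the old data
    idx = torch.randperm(len(old_dataset))[:rehearsal_size].tolist()
    mixed = ConcatDataset([new_dataset, Subset(old_dataset, idx)])
    loader = DataLoader(mixed, batch_size=32, shuffle=True)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR: gentle update
    loss_fn = torch.nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):  # "a couple of iterations", as suggested above
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```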

OpenNLP sample training data for disease

I'm using OpenNLP for data classification. I could not find a TokenNameFinderModel for diseases here. I know I can create my own model, but I was wondering whether there is any large sample training dataset available for diseases?
You can easily create your own training dataset using the modelbuilder addon, following the rules mentioned here to create a good NER model.
You can find some help on using the modelbuilder addon here.
Basically, you put all the text in one file and the NER entities in another. The addon searches for each entity and replaces it with the required tag, hence producing the tagged data. It should be pretty easy to use this tool!
Hope this helps!
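For reference, OpenNLP's name finder training format is one whitespace-tokenized sentence per line, with each entity wrapped in <START:type> ... <END> markers. A hypothetical two-line sample for a disease model might look like this:

```
The patient was diagnosed with <START:disease> type 2 diabetes <END> last year .
Early symptoms of <START:disease> influenza <END> include fever and fatigue .
```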
