Issue with Relation Annotation (rel.manual) in spaCy's Prodigy tool

I am trying to build a relation extraction model with spaCy's Prodigy tool.
NOTE: ner.manual, ner.correct, and rel.manual are all recipes provided by Prodigy.
(ner.manual, ner.correct) The first step involved annotating data and training an NER model that can predict entities (this step is done and the model is obtained).
The next step involves annotating the relations between the entities. This step can be done in two different ways:
i. Label the entities and relations all from scratch.
ii. Use the trained NER model to predict the entities in the UI, correct them if needed (similar to ner.correct), and label the relations between the entities.
The issue I am facing is that whenever I use the trained model in the rel.manual recipe, no entities are predicted.
Could someone help me with this?
PS: There is no trailing-whitespace issue; I cross-verified that.
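
For anyone hitting the same wall: as far as I can tell from the Prodigy documentation, rel.manual does not pre-highlight the model's predicted entities by default; the recipe has to be told to add them, which is what the --add-ents flag is for, along the lines of prodigy rel.manual rel_dataset ./trained_ner_model data.jsonl --label RELATED --add-ents (the dataset name, model path, and label here are made up). It is also worth checking outside Prodigy that the model really does predict entities on the raw input, as in this minimal sketch:

    import spacy

    # Hypothetical path to the NER model trained in the first step.
    nlp = spacy.load("./trained_ner_model")

    doc = nlp("Acme Corp opened a new office in Berlin last year.")
    # If this prints an empty list, the problem is the model itself,
    # not the rel.manual recipe.
    print([(ent.text, ent.label_) for ent in doc.ents])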

Related

Markup for a relation extraction task with an LSTM model

I'm trying to figure out how to properly mark up data for a relation extraction task (I'm going to use an LSTM model).
So far, I have figured out that the entities are highlighted using the
<e1>, </e1>, <e2> and </e2>
tags, and the class of the relationship is indicated in a separate column.
But what should I do when one entity in a sentence has relations, of the same type or of different types, to two other entities at once?
An example is shown in the image.
Or when there are four entities in one sentence and two relations are defined?
I see two options. The first is to introduce new tags
<e3>, </e3>, <e4> and </e4>
and do multi-class classification, but I haven't seen that done anywhere. The second option is to make a copy of the sentence and split the relations across the copies.
Can you please tell me how to do this markup?
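
For what it's worth, the common convention (e.g. the SemEval-2010 Task 8 data) is your second option: emit one tagged copy of the sentence per candidate entity pair, so a sentence where one entity relates to two others becomes two training instances, each with its own relation label. A minimal sketch of that preprocessing step (the sentence, offsets, and relation labels below are made up):

    def make_instances(sentence, entity_spans, relations):
        """Emit one <e1>/<e2>-tagged copy of the sentence per relation.

        entity_spans: list of (start, end) character offsets
        relations: list of (head, tail, label), indexing into entity_spans
        """
        instances = []
        for head, tail, label in relations:
            (h0, h1), (t0, t1) = entity_spans[head], entity_spans[tail]
            tagged = sentence
            # Insert tags from right to left so earlier offsets stay valid.
            for start, end, open_tag, close_tag in sorted(
                [(h0, h1, "<e1>", "</e1>"), (t0, t1, "<e2>", "</e2>")],
                reverse=True,
            ):
                tagged = tagged[:end] + close_tag + tagged[end:]
                tagged = tagged[:start] + open_tag + tagged[start:]
            instances.append((tagged, label))
        return instances

    # One sentence, three entities, two relations -> two instances.
    for text, label in make_instances(
        "Alice works at Acme and lives in Paris.",
        [(0, 5), (15, 19), (33, 38)],
        [(0, 1, "EMPLOYED_BY"), (0, 2, "LIVES_IN")],
    ):
        print(label, "\t", text)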

How to construct the pipeline for NER and achieve better results using spaCy?

I'm currently trying to do named entity recognition for tweets using spaCy. For that purpose I created word vectors with Gensim, which I'm using to train a new blank NER model. At the moment I am a bit confused about how to set up the pipeline for my purpose. Regarding this, I have the following questions:
The pipeline I'm currently using consists of only one ner component. Does anyone have recommendations for constructing the pipeline (e.g. using tok2vec before ner)?
I also wonder whether my approach of training a new model with previously created word vectors is the right one, and how I could further improve prediction accuracy.
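
In spaCy 3 the usual layout for this is a shared tok2vec component followed by ner, which is easiest to get by generating a training config (python -m spacy init config config.cfg --lang en --pipeline ner) and pointing it at your vectors. As a rough programmatic sketch, assuming the Gensim vectors were first converted with python -m spacy init vectors en tweet_vectors.txt ./vec_model (paths and labels here are made up):

    import spacy

    # Load the vectors-only pipeline produced by `spacy init vectors`,
    # so the vocab already carries the Gensim word vectors.
    nlp = spacy.load("./vec_model")

    # Add the NER component. In a config-based setup you would instead
    # add a shared tok2vec component and let ner listen to it.
    ner = nlp.add_pipe("ner")
    for label in ("PERSON", "ORG", "LOC"):
        ner.add_label(label)

    print(nlp.pipe_names, nlp.vocab.vectors.shape)

Training on vectors built from tweets is a reasonable approach; the usual levers for accuracy are more (and cleaner) annotated data, a tok2vec trained jointly with the NER, and making sure the static vectors are actually wired into the embedding layer in the config.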

Named Entity Recognition Model Training Data Creation

This question might be quite trivial and illogical, but I am a bit confused.
Does adding an article that does not contain any tagged entities to an NER training dataset add any value to the model?
Does the model (specifically a spaCy NER model) gain some understanding of what not to tag as an entity from this type of article? Or is it not required and ignored during training?
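
For spaCy specifically, such articles do add value: as far as the v3 training-data format goes, a text annotated with an empty entity list is a fully negative example in which every token is labelled "O" (outside any entity), and the model learns from it what not to tag. A minimal sketch of constructing such a training example (the sentence is made up):

    import spacy
    from spacy.training import Example

    nlp = spacy.blank("en")
    text = "The meeting was postponed until further notice."

    # An empty entity list is a valid annotation: every token becomes "O",
    # i.e. an explicit negative example rather than missing data.
    doc = nlp.make_doc(text)
    example = Example.from_dict(doc, {"entities": []})
    print(example.reference.ents)  # () -- no entities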

Building on existing models in spaCy

This is a question about training models with spaCy 3.x.
I couldn't find a good answer/solution on Stack Overflow, hence the query.
Suppose I am using an existing spaCy model, like the en model, and want to add my own entities and train it. Since I work in the biomedical domain, those would be things like virus name, shape, length, temperature, temperature value, etc. I don't want to lose the entities spaCy already tags, like organization names, countries, etc.
All suggestions are appreciated.
Thanks
There are a few ways to do that.
The best way is to train your own model separately and then combine both models in one pipeline, running one before the other. See the double NER example project for an overview of that.
It's also possible to update the pretrained NER model; see this example project. However, this usually isn't a good idea, and definitely not if you're adding completely different entities. You'll run into what's called "catastrophic forgetting": even though you're technically only updating the model, it ends up forgetting everything that isn't represented in your current training data.
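
As a rough sketch of the double-NER idea, assuming the custom pipeline was trained with its own internal tok2vec (no shared listener) and a compatible vocab; the ./biomed_ner path and the example sentence are made up:

    import spacy

    base_nlp = spacy.load("en_core_web_sm")  # stock labels: ORG, GPE, etc.
    biomed_nlp = spacy.load("./biomed_ner")  # your virus/temperature model

    # Source the custom NER component into the stock pipeline so it runs
    # after the built-in one; entities already set by the first component
    # are treated as fixed by the second.
    base_nlp.add_pipe("ner", source=biomed_nlp, name="biomed_ner", after="ner")

    doc = base_nlp("The Ebola virus samples reached the WHO lab in Geneva.")
    print([(ent.text, ent.label_) for ent in doc.ents])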

How to prevent entities from being overwritten when retraining a pretrained language model for NER

In terms of NER (named entity recognition): assume I have a pretrained language model that can recognize some predefined entities. If I retrain this model, and I don't want those predefined entities to be overwritten during retraining, what can I do? For example, a BERT-based model has been trained to recognize Databricks as SKILL; how can I make sure during retraining that it won't be re-tagged as PRODUCT?
Thanks.
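
One common mitigation is rehearsal (sometimes called pseudo-rehearsal): before retraining, run the existing model over the new training texts and keep its old-label predictions as silver annotations alongside the new gold ones, so labels like SKILL stay represented in the training data. A sketch in spaCy terms, though the same idea applies to a BERT-based tagger; the model path and the merging policy here are assumptions, not a fixed recipe:

    import spacy

    # The pipeline that already predicts the old labels (e.g. SKILL).
    old_nlp = spacy.load("./pretrained_model")

    def with_old_entities(text, new_entities):
        """Merge the old model's predictions with new gold spans.

        new_entities: list of (start_char, end_char, label) tuples.
        New gold spans win on overlap; everything else the old model
        found (e.g. Databricks as SKILL) is kept, so it stays in the
        training data instead of being forgotten or re-tagged.
        """
        merged = list(new_entities)
        for ent in old_nlp(text).ents:
            overlaps = any(
                ent.start_char < end and start < ent.end_char
                for start, end, _ in new_entities
            )
            if not overlaps:
                merged.append((ent.start_char, ent.end_char, ent.label_))
        return {"entities": sorted(merged)}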
