Alternative to maximum entropy for NLP model - nlp

I am working on an NLP model where the model identifies the ARGs given the PREDICATE.
I am using MaxEnt (maximum entropy) for the model.
My model works fine: I trained it on a train dataset, created all the features, and then tested it with a test dataset.
I would like to try this with some other package, not with MaxEnt.
Can someone suggest what else I can use?

Related

Hugging Face transformer: model bio_ClinicalBERT not trained for any of the tasks?

This may be the most beginner question of all :sweat:.
I just started learning about NLP and Hugging Face. The first thing I'm trying to do is apply one of the bioBERT models to some clinical note data and see what I get, before moving on to fine-tuning the model. "emilyalsentzer/Bio_ClinicalBERT" looks like the closest model to my data.
But whenever I try to use it for any of the analyses, I always get this warning:
Some weights of the model checkpoint at emilyalsentzer/Bio_ClinicalBERT were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight']
From chapter 2 of the Hugging Face course, I understand what this means:
This is because BERT has not been pretrained on classifying pairs of sentences, so the head of the pretrained model has been discarded and a new head suitable for sequence classification has been added instead. The warnings indicate that some weights were not used (the ones corresponding to the dropped pretraining head) and that some others were randomly initialized (the ones for the new head). It concludes by encouraging you to train the model, which is exactly what we are going to do now.
So I went on to test which NLP task I can use "emilyalsentzer/Bio_ClinicalBERT" for, out of the box.
from transformers import pipeline, AutoModel

checkpoint = "emilyalsentzer/Bio_ClinicalBERT"
nlp_task = ['conversational', 'feature-extraction', 'fill-mask', 'ner',
            'question-answering', 'sentiment-analysis', 'text-classification',
            'token-classification', 'zero-shot-classification']

for task in nlp_task:
    print(task)
    process = pipeline(task=task, model=checkpoint)
And I got the same warning message for every one of these tasks, so it appears that I shouldn't (or am advised not to) use the model for any of them. This really confuses me. The original Bio_ClinicalBERT paper stated that they obtained good results on a few different tasks, so the model was certainly trained for those tasks. I have a similar issue with other models as well: a blog post or research paper says a model obtained good results on a specific task, but when I try to use it with pipeline I get this warning message. Is there any reason why the head layers were not included in the model?
I only have a few hundred clinical notes (also unannotated :frowning_face:), so it doesn't look like that is big enough for training. Is there any way I could use the model on my data without training?
Thank you for your time.
This Bio_ClinicalBERT model was trained on the masked language modeling (MLM) task. This task is basically used to learn the semantic relations between tokens in the language/domain. For downstream tasks, you can fine-tune the model's head with your small dataset, or you can use an already fine-tuned model such as Bio_ClinicalBERT-finetuned-medicalcondition, which is a fine-tuned version of the same model. You can find all the fine-tuned models on Hugging Face by searching for 'bio-clinicalBERT' as in the link.
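For example, since the checkpoint does ship with its MLM head (those are the cls.predictions weights the warning lists as unused), the fill-mask pipeline should work out of the box. A minimal sketch, with an illustrative sentence of my own:

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="emilyalsentzer/Bio_ClinicalBERT")

# The [MASK] token is filled by the pretrained MLM head; the sentence is just an example.
for prediction in fill_mask("The patient was admitted with chest [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))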

How can I train an OpenAI fine-tuned model with more prompts

I fine-tuned an OpenAI model with some prompts following this documentation. It succeeded and created a new model in the Playground. How can I retrain (fine-tune) that same model with new prompts?
This is what I found in the OpenAI documentation:
If you have already fine-tuned a model for your task and now have additional training data that you would like to incorporate, you can continue fine-tuning from the model. This creates a model that has learned from all of the training data without having to re-train from scratch.
i.e. you can't modify the already existing fine-tuned model in place, but you can continue fine-tuning from it with your new prompts, which produces a new model that has learned from all of the training data.
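A minimal sketch of that continuation, assuming the legacy openai Python package (pre-1.0) and its fine-tunes API; the training file ID and fine-tuned model name below are hypothetical placeholders:

import openai

# Hypothetical placeholders: an uploaded JSONL file of the new prompts, and the
# name of the model produced by the earlier fine-tune run.
openai.FineTune.create(
    training_file="file-abc123",
    model="curie:ft-your-org-2023-03-01-12-00-00",
)

This creates a new fine-tuned model that starts from your existing one, as the documentation describes.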

Pytorch based Bert NER for transfer learning/retraining

I trained a BERT-based NER model using the PyTorch framework by referring to the article below:
https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/
After training the model with this approach, I saved it using torch.save(). Now I want to retrain the model on a new dataset.
Can someone please help me with how to perform this retraining/transfer learning, as I'm new to NLP and transformers?
Thanks in advance.
Thanks in advance.
First, you can read the PyTorch documentation on saving and loading models; it is very helpful for retraining a saved model on a new dataset. Below is an example of loading a saved model that should be useful to you.
Original doc: https://pytorch.org/tutorials/beginner/saving_loading_models.html
Example code: https://pythonguides.com/pytorch-load-model/
These two links should help you train a saved model on a new dataset.
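A minimal sketch of that workflow, assuming the model was saved via its state_dict and that new_loader is a DataLoader over the new NER dataset (the file name, label count, and batch keys are assumptions, not values from the article):

import torch
from transformers import BertForTokenClassification

# Rebuild the same architecture used for the original training run (label count assumed).
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)
model.load_state_dict(torch.load("ner_model.pt", map_location="cpu"))  # hypothetical path
# If torch.save() stored the whole model object instead, use: model = torch.load("ner_model.pt")

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

for batch in new_loader:  # assumed DataLoader yielding input_ids, attention_mask, labels
    optimizer.zero_grad()
    outputs = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=batch["labels"])
    outputs.loss.backward()  # token-classification loss is returned when labels are passed
    optimizer.step()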

Does training a TFLite model require annotated images?

I am trying to implement a TFLite model for food detection and segmentation. This is the model I chose as suitable for my food image dataset: https://tfhub.dev/s?deployment-format=lite&q=inception%20resnet%20v2
I searched Google to understand how the images need to be annotated, but only ended up confused. I understand that the dataset is converted to TFRecords and then fed to the pretrained model. But for training the model on a custom dataset, doesn't it require an annotation file? I don't see any info about this on TF Hub either.
Can anyone please help me with this?
The answer to your question depends on which model you plan to train.
In the case of a model for food detection and segmentation, you do need annotations when training. It is a supervised learning model, so if you do not provide it with labeled training data it cannot learn from the images.
If you were to train an autoencoder instead, the data would not need to be annotated. I hope the keywords used in this answer help you search for more information about the topic.
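To make "annotations" concrete: for detection, each image in the TFRecord typically carries its bounding boxes and class labels alongside the encoded pixels. A minimal sketch of writing one annotated example, loosely following the TensorFlow Object Detection API's feature-key conventions (the image file, box coordinates, and label id are assumptions; your training pipeline defines the exact keys it expects):

import tensorflow as tf

def _bytes(value):  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _floats(value): return tf.train.Feature(float_list=tf.train.FloatList(value=value))
def _ints(value):   return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

# One image with a single bounding box labelled "pizza" (normalized coordinates).
example = tf.train.Example(features=tf.train.Features(feature={
    "image/encoded": _bytes(open("pizza.jpg", "rb").read()),  # hypothetical image file
    "image/format": _bytes(b"jpeg"),
    "image/object/bbox/xmin": _floats([0.10]),
    "image/object/bbox/ymin": _floats([0.25]),
    "image/object/bbox/xmax": _floats([0.80]),
    "image/object/bbox/ymax": _floats([0.90]),
    "image/object/class/text": _bytes(b"pizza"),
    "image/object/class/label": _ints([1]),  # id from an assumed label map
}))

with tf.io.TFRecordWriter("train.tfrecord") as writer:
    writer.write(example.SerializeToString())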

Using pretrained models in Pytorch for Semantic Segmentation, then training only the fully connected layers with our own dataset

I am learning PyTorch and trying to understand how the library works for semantic segmentation.
What I've understood so far is that we can use a pretrained model in PyTorch. I found an article that used such a model in .eval() mode, but I have not been able to find any tutorial on training such a model on our own dataset. I have a very small dataset and I need transfer learning to get results. My goal is to train only the FC layers with my own data. How is that achievable in PyTorch without complicating the code with OOP or so many .py files? I have had a hard time figuring out such repos on GitHub, as I am not the most proficient person when it comes to OOP. I have been using Keras for deep learning until recently, where everything is easy and straightforward. Do I have the same options in PyTorch?
I appreciate any guidance on this. I need to run a piece of code that does the semantic segmentation, and I am really confused about many of the steps I need to take.
Assume you start with a pretrained model called model. All of this occurs before you pass the model any data.
Start by finding the layers you want to train: list all of them with model.children(). Running this command will show you all of the blocks and layers.
list(model.children())
Suppose you have now found the layers that you want to fine-tune (your FC layers, as you describe). If the layers you want to train are the last 5, you will keep all of the layers except those last 5 and set their requires_grad params to False so they don't update when you run the training algorithm. First, look at those last 5 layers (the ones you will remove and later reattach):
list(model.children())[-5:]
Set those layers aside so you can reattach them later:
layer_list = list(model.children())[-5:]
Rebuild the model without them using nn.Sequential (add import torch.nn as nn if you haven't already; this assumes the original model's forward pass simply chains its children):
model_small = nn.Sequential(*list(model.children())[:-5])
Set requires_grad params to False:
for param in model_small.parameters():
    param.requires_grad = False
Now you have a model called model_small that has all of the layers except the ones you want to train. You can now reattach the layers you removed; freshly constructed layers have requires_grad set to True by default, so when you train the model it will only update the weights of those layers. Fill in the constructor arguments to match your architecture (the sizes below are only examples).
model_small.avgpool_1 = nn.AdaptiveAvgPool2d(output_size=(1, 1))
model_small.lin1 = nn.Linear(2048, 256)           # example sizes; match your backbone's output
model_small.logits = nn.Linear(256, num_classes)  # num_classes = number of target classes
model_small.softmax = nn.Softmax(dim=1)
model = model_small.to(device)
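For semantic segmentation specifically, the same idea can be written more directly with a torchvision model by freezing everything and swapping only the final classification layer. A minimal sketch, assuming torchvision's FCN-ResNet50 and a made-up number of classes (layer indices follow torchvision's implementation; the training loop is only indicated in comments):

import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

num_classes = 3  # your own number of classes (assumption)
model = fcn_resnet50(pretrained=True)

# Freeze every pretrained parameter.
for param in model.parameters():
    param.requires_grad = False

# Replace the final 1x1 conv of the FCN head; freshly created layers default to requires_grad=True.
model.classifier[4] = nn.Conv2d(512, num_classes, kernel_size=1)

# Give the optimizer only the parameters of the new layer.
optimizer = torch.optim.Adam(model.classifier[4].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
# for images, masks in train_loader:      # assumed DataLoader: images (N,3,H,W), masks (N,H,W)
#     out = model(images)["out"]          # torchvision segmentation models return a dict
#     loss = criterion(out, masks)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()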
