Hugging Face has two libraries, pytorch_transformers 1.2.0 and transformers 4.x, and possibly others. Some papers use code from pytorch_transformers, and I am trying to build some production solutions on top of pytorch_transformers. Is Hugging Face still maintaining the pytorch_transformers library?
As mentioned on discuss.huggingface.co:
pytorch_transformers was the older name/version. It was later renamed to transformers after adding support for TF. [...]
So the answer is yes, under a new name.
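To make the rename concrete, here is a minimal sketch of the import change; the model classes kept their names, only the package changed:

# Before the rename (pytorch_transformers, frozen at 1.2.0):
# from pytorch_transformers import BertModel, BertTokenizer

# After the rename (transformers, actively maintained):
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")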
I want to run entity linking for a project of mine. I used spaCy for the NER on a corpus of documents. Is there an existing linking model I can simply use to link the entities found?
The documentation I have found only seems to cover how to train a custom one.
Examples:
https://spacy.io/api/kb
https://github.com/explosion/spaCy/issues/4511
Thanks!
spaCy does not distribute pre-trained entity linking models. See here for some comments on why not.
I found one - Facebook GENRE:
https://github.com/facebookresearch/GENRE
I'm interested in NLP and I came across TensorFlow and BERT. Both seem to be from Google and both seem to be the best thing for sentiment analysis as of today, but I don't understand what exactly they are and what the difference between them is. Can someone explain?
TensorFlow is an open-source machine-learning library that lets you build a deep learning model/architecture. BERT, by contrast, is one such architecture. You can build many models using TensorFlow, including RNNs, LSTMs, and even BERT. Transformers like BERT are a good choice if you just want to apply a model to your data and don't care about the deep learning field itself. For that purpose, I recommend the Hugging Face library, which provides a straightforward way to employ a transformer model in just a few lines of code. But if you want to take a deeper look at these models, I suggest you learn about the well-known deep learning architectures for text data, such as RNNs, LSTMs, and CNNs, and try to implement them using an ML library like TensorFlow or PyTorch.
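For example, here is roughly what a sentiment classifier looks like with the Hugging Face pipeline API (a minimal sketch; the call downloads a default pre-trained model on first use):

from transformers import pipeline

# Downloads a default pre-trained sentiment model on first use
classifier = pipeline("sentiment-analysis")

print(classifier("I love this library!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.9998}]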
BERT and TensorFlow are not the same kind of thing, so they are not directly comparable. Also, there are not only two, but many implementations of BERT; most are basically equivalent.
The implementations that you mentioned are:
The original code by Google, in TensorFlow. https://github.com/google-research/bert
The implementation by Hugging Face, in PyTorch and TensorFlow, which reproduces the same results as the original implementation and uses the same checkpoints as the original BERT paper. https://github.com/huggingface/transformers
These are the differences regarding different aspects:
In terms of results, there is no difference in using one or the other, as they both use the same checkpoints (same weights) and their results have been checked to be equal.
In terms of reusability, the Hugging Face library is probably more reusable, as it is designed specifically for that. It also gives you the freedom of choosing TensorFlow or PyTorch as the deep learning framework.
In terms of performance, they should be the same.
In terms of community support (e.g. asking questions on GitHub or Stack Overflow about them), the Hugging Face library is better suited, as a lot of people use it.
Apart from BERT, the transformers library by HuggingFace has implementations for lots of models: OpenAI GPT-2, RoBERTa, ELECTRA, ...
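As a concrete illustration of the framework choice, the same bert-base-uncased checkpoint can be loaded in either PyTorch or TensorFlow (a minimal sketch; the TF class requires TensorFlow to be installed):

from transformers import BertTokenizer, BertModel, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

pt_model = BertModel.from_pretrained("bert-base-uncased")    # PyTorch
tf_model = TFBertModel.from_pretrained("bert-base-uncased")  # TensorFlow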
I am trying to understand what the merges.txt file represents in the tokenizers for the RoBERTa model in the Hugging Face library. However, nothing is said about it on their website. Any help is appreciated.
You can find a description here:
https://github.com/huggingface/transformers/issues/4777
https://github.com/huggingface/transformers/issues/1083#issuecomment-524303077
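In short (per the issues above), RoBERTa uses a byte-level BPE tokenizer: vocab.json maps tokens to ids, while merges.txt lists the learned byte-pair merge rules in priority order. A minimal sketch of the two files in action:

from transformers import RobertaTokenizer

# from_pretrained downloads both vocab.json and merges.txt
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# BPE starts from individual bytes and repeatedly applies the
# highest-priority merge rule found in merges.txt
print(tokenizer.tokenize("Hello world"))
# e.g. ['Hello', 'Ġworld']  (Ġ marks a preceding space)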
spaCy's site says they use the Universal Dependencies scheme on their annotation specifications page. But when I parse "I love you", "you" is labeled as a "dobj" of "love". There is no "dobj" in the Universal Dependencies relations documentation. So I have two questions:
How to get spacy to use the universal dependency relations?
How to get the doc for the relations spacy uses?
spaCy's provided models don't use UD dependencies for English or German. From the docs, where you can find tables of the dependency labels (https://spacy.io/api/annotation#dependency-parsing):
The individual labels are language-specific and depend on the training corpus.
For most other models / languages, UD dependencies are used.
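If you want to check which label scheme a loaded model actually uses, you can inspect the parser's labels directly (a minimal sketch, assuming the en_core_web_sm model is installed):

import spacy

nlp = spacy.load("en_core_web_sm")

# The labels the English parser was trained with (corpus-specific,
# not UD), e.g. 'dobj', 'nsubj', 'pobj', ...
print(nlp.get_pipe("parser").labels)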
How to get spacy to use the universal dependency relations?
According to the official spaCy documentation, all spaCy models are trained on Universal Dependencies corpora, which are language-specific. For English, you can see the full list of labels from this link, where you will find dobj listed as "direct object".
How to get the doc for the relations spacy uses?
I don't know what you mean by doc. If you mean the documentation, I already provided the official documentation when answering the first question. You can also use spacy.explain() to get a quick answer, like so:
>>> import spacy
>>> spacy.explain('dobj')
'direct object'
>>> spacy.explain('nsubj')
'nominal subject'
Hope this answers your question!
What is the difference between using CoreNLP (https://stanfordnlp.github.io/CoreNLP/ner.html) and the standalone distribution Stanford NER (https://nlp.stanford.edu/software/CRF-NER.html) for doing Named Entity Recognition? I noticed that the standalone distribution comes with a GUI, but are there any other differences in terms of supported functionality?
I'm trying to decide which one to use for a commercial purpose. I'm working on English models only.
There's no difference in terms of which algorithm is run. I would suggest the full CoreNLP distribution, since you can use the pipeline code. But both versions use exactly the same code for the actual NER part.