anyone know why wordnet doesn't contain the word 'she'? thanks.
see this link
The answer to this is in the WordNet FAQ (which I just discovered existed), and also in this other question.
Basically, she is a pronoun - a word that stands in for a noun. Instead of referring to Betty by her name - which is a proper noun - you may refer to her as she.
Pronouns by themselves (without Betty, in this case) don't carry any meaning of their own. Some people, including the WordNet people, call this kind of word a closed-class word. By design, WordNet only includes open-class words.
From the Wordnet FAQ:
Q. Why is WordNet missing: of, an, the, and, about, above, because, etc. [and pronouns]
A. WordNet only contains "open-class words": nouns, verbs,
adjectives, and adverbs. Thus, excluded words include determiners,
prepositions, pronouns, conjunctions, and particles.
WordNet isn't a standard dictionary. For example, if you search "he" you get the element helium and the fifth letter of the Hebrew alphabet as definitions, but you don't get a definition of a noun referring to a man.
My best guess is that "she" isn't included because it doesn't have a definition of anything other than a noun referring to a woman. I say best guess because I'm not a language expert, so I can't definitively say it has no other definitions. If you look up "he" on thefreedictionary.com, though, it does have references to helium and the Hebrew letter; if you look up "she", it only has definitions related to gender.
tl;dr The reason "she" doesn't exist as a word in WordNet is that, by their rules (or whatever you want to call them), it isn't the kind of word WordNet includes.
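If you want to check this programmatically, here's a minimal sketch using NLTK's WordNet interface (assumes you have nltk installed and have run nltk.download('wordnet')):

from nltk.corpus import wordnet as wn

# 'she' has no entry at all, while 'he' only has noun senses
# (helium and the Hebrew letter), exactly as described above
print(wn.synsets('she'))  # -> []
print(wn.synsets('he'))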
Related
I have a list of 20,000 words. I want to know which of the 20k words are "weird" in some way. This is part of a text cleaning task.
Albóndiga is fine, huticotai is no Spanish word I know... neither is 56%$3estapa
This means I must compare declined/conjugated words in isolation to some source of truth. Everyone recommends SpaCy. Fine.
Somehow, though, using the code below and a test file with a few dozen words, spaCy thinks they are all "ROOT" words. If you speak Spanish, you'll know that's not right.
Technically, I don't want to lemmatize anything! I want to stem the words. I just want to pare down the 20k-long word list to something I, as a Spanish-speaking linguist, can look at to determine what sorts of crazy desmadre (B.S.) is going on.
Here is an example of the output I get:
trocito NOUN ROOT trocito
ayuntamiento NOUN ROOT ayuntamiento
eyre NOUN ROOT eyre
suscribíos NOUN ROOT suscribío
mezcal ADJ ROOT mezcal
marivent VERB ROOT mariventir
inversores NOUN ROOT inversor
stenger VERB ROOT stenger
Clearly, "stenger" is not a Spanish word, though naïvely, spaCy thinks it is. Mezcal is a NOUN (and a very good time). You get the picture.
Here is my code:
import spacy

nlp = spacy.load("es_core_news_sm")

# read the word list, one word per line
new_lst = []
with open("vocabu_suse.txt", 'r') as lst:
    for i in lst:
        new_lst.append(i.strip())

# run each word through the pipeline in isolation and print its analysis
for i in new_lst:
    j = nlp(i)
    for token in j:
        print(token.text, token.pos_, token.dep_, token.lemma_)
I'm pretty confused about what you're trying to do here. To be clear, my understanding is that your main objective is to find what words in your list aren't junk, and you tried using lemmatization to check them, and the lemmatization results seem wrong.
This means I must compare declined/conjugated words in isolation to some source of truth. Everyone recommends SpaCy. Fine.
spaCy is great for many NLP tasks but dealing with lists of words with no context is really not something it's intended for, and I don't think it's going to help you much here.
Finding Real Words
To address your main problem...
First, to find out what words aren't junk, you can use a wordlist you trust or a relatively normal large corpus.
If you don't have a wordlist you trust, you can see if the words are in the vocab of the spaCy models. spaCy's vocabs can include junk words, but since they're built by frequency they should only include common errors. This wordfreq repo may also be helpful.
To check if a word has a word vector in spaCy, in the medium or large models you can use tok.is_oov.
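For example, a minimal sketch (assuming the medium Spanish model is installed via python -m spacy download es_core_news_md):

import spacy

nlp = spacy.load("es_core_news_md")

# is_oov is True when the token has no vector in the model's vocabulary,
# which is a rough signal that the word may be junk
for word in ["ayuntamiento", "trocito", "huticotai", "stenger"]:
    doc = nlp(word)
    print(word, doc[0].is_oov)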
If you want to go from a corpus, use a corpus you have or Wikipedia or something, and discard words below a certain word count threshold. (I understand inflection makes this more difficult, but with a sufficiently large corpus you should still be finding real words with some frequency.)
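As a sketch of the frequency approach, the wordfreq package mentioned above ships precomputed frequency lists, so you don't have to count a corpus yourself (the word choices here are just illustrative):

from wordfreq import zipf_frequency

# zipf_frequency returns 0.0 for strings absent from the Spanish frequency lists
for word in ["albóndiga", "ayuntamiento", "huticotai", "stenger"]:
    print(word, zipf_frequency(word, "es"))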
About the ROOT
The ROOT tag is a dependency tag, not a tag of word form. In a dependency parse of a sentence the ROOT is typically the main verb. In a sentence of one word, that word is always the root.
I think you want the tag_ attribute, which is a detailed language-specific POS tag. The pos_ attribute is a coarse-grained tag designed for multi-lingual applications.
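For example, with the same small Spanish model as in your code (the exact tags you get will depend on the model version):

import spacy

nlp = spacy.load("es_core_news_sm")

for token in nlp("suscribíos"):
    # pos_ is the coarse universal tag, tag_ the detailed language-specific tag,
    # dep_ the dependency label (always ROOT for a single-word "sentence")
    print(token.text, token.pos_, token.tag_, token.dep_)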
About Lemmatization
I don't understand this statement.
technically, I don't want to lemmatize anything! I want to stem the words.
Lemmatization is almost always better than stemming. "Stemming" usually refers to a rule-based process that works based on token patterns, so for your particular problem it won't help you at all - it'll just tell you if words have common word endings or something. You could have a word like "asdfores" and a stemmer would happily tell you it's the plural of "asdfor". Lemmatizers are often based on databases of words that have been checked so it's closer to what you need.
In either case, spaCy doesn't do stemming.
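If you do want a stemmer anyway, it would have to come from another library. For instance, here is a minimal sketch with NLTK's Snowball stemmer, which supports Spanish (it is purely rule-based, so as noted it cannot tell you whether a word is real):

from nltk.stem import SnowballStemmer

stemmer = SnowballStemmer("spanish")
for word in ["inversores", "suscribíos", "stenger"]:
    # the stemmer happily strips suffixes from junk words too
    print(word, stemmer.stem(word))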
If you are trying to clean up a list of words, you could use a PoS tagger that works with a dictionary, such as Freeling. I think that the output analysis level morfo could be suitable for your needs. As an example:
cat input.txt | analyzer -f es.cfg --flush --outlv morfo --noprob
Output:
trocito trocito NCMS000 -1
ayuntamiento ayuntamiento NCMS000 -1
eyre
suscribíos suscribir+os VMM02P0+PP2CP00 -1
mezcal mezcal NCMS000 -1
marivent
inversores inversor AQ0MP00 -1 inversor NCMP000 -1
stenger
You should check the entries without a tag assigned ("eyre", "marivent", "stenger"), not present in the dictionary.
Please note that the tool includes some kind of "guessing" for malformed words, so maybe you should pay attention to entries including '_' in their lemma as well.
$ echo "hanticipación" | analyzer -f es.cfg --flush --outlv morfo --noprob
Output:
hanticipación h_anticipación NCFS000 -1
I have a word list containing about 30000 unique words.
I would like to group this list of words based on how similar these words tend to be.
Can I create an ontology tree using this list, possibly with the help of WordNet?
So essentially what I want to do is aggregate these words in some meaningful way to reduce the size of the list. What kind of techniques can I use to do this?
You could certainly use Wordnet to make a first step towards clustering these words according to their SYNSET. In addition to 'same meaning' and 'opposite meaning' Wordnet also includes 'part of' relationships. Following these relationships for the word 'beer' for example visits all of these containing synsets: Brew1, Alcohol1, Drug_of_abuse1, Drug1, Agent3, Substance7, Matter3, Physical_entity1, Entity1, Causal_agent1, Beverage1, Liquid1, Fluid1, Substance1, Part1, Relation1, Abstraction6, Food1.
But ... how many you will find in WordNet will depend on what kind of words you have. It doesn't include verb tenses, and it doesn't have a very large or very modern set of proper nouns. If your 30,000 words are adjectives and nouns it should do quite well.
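If you want to experiment with this, here is a minimal sketch using NLTK's WordNet interface (assumes nltk.download('wordnet'); it follows only the hypernym relation, not the 'part of' one):

from nltk.corpus import wordnet as wn

beer = wn.synsets('beer')[0]
# walk up the 'is a kind of' (hypernym) hierarchy from the first 'beer' synset
for syn in beer.closure(lambda s: s.hypernyms()):
    print(syn.name())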
How to decide whether in a sentence a word is infinitive or not?
For example here "fixing" is infinitive:
Fixing the door was also easy but fixing the window was very hard.
But in
I am fixing the door
it is not. How do people disambiguate these cases?
To elaborate on my comment:
In PoS tagging, choosing between a gerund (VBG) and a noun (NN) is quite subtle and has many special cases. My understanding is fixing should be tagged as a gerund in your first sentence, because it can be modified by an adverb in that context. Citing from the Penn PoS tagging guidelines (page 19):
"While both nouns and gerunds can be preceded by an article or a possessive pronoun, only a noun (NN) can be modified by an adjective, and only a gerund (VBG) can be modified by an adverb."
EXAMPLES:
Good/JJ cooking/NN is something to enjoy.
Cooking/VBG well/RB is a useful skill.
Assuming you meant 'automatically disambiguate', this task requires a bit of processing (pos-tagging and syntactic parsing). The idea is to find instances of a verb that are not preceded by an agreeing Subject Noun Phrase. If you also want to catch infinitive forms like "to fix", just add that to the list of forms you are looking for.
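As a starting point for the automatic route, here is a small sketch with NLTK's pretrained tagger on the guideline examples above (assumes nltk.download('punkt') and nltk.download('averaged_perceptron_tagger'); a statistical tagger won't always reproduce the guideline tags, so treat the output with care):

import nltk

for sent in ["Good cooking is something to enjoy.", "Cooking well is a useful skill."]:
    # compare the tag assigned to "cooking"/"Cooking" (NN vs. VBG) in the two outputs
    print(nltk.pos_tag(nltk.word_tokenize(sent)))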
I'm currently taking a Natural Language Processing course at my University and am still confused about some basic concepts. I get the definition of POS Tagging from the Foundations of Statistical Natural Language Processing book:
Tagging is the task of labeling (or tagging) each word in a sentence
with its appropriate part of speech. We decide whether each word is a
noun, verb, adjective, or whatever.
But I can't find a definition of shallow parsing in the book, even though it describes shallow parsing as one of the uses of POS tagging. So I began to search the web and found no direct explanation of shallow parsing, but did find this on Wikipedia:
Shallow parsing (also chunking, "light parsing") is an analysis of a sentence which identifies the constituents (noun groups, verbs, verb groups, etc.), but does not specify their internal structure, nor their role in the main sentence.
I frankly don't see the difference, but that may be because of my English or just me not understanding a simple basic concept. Can anyone please explain the difference between shallow parsing and POS tagging? Is shallow parsing also often called Shallow Semantic Parsing?
Thanks in advance.
POS tagging would give a POS tag to each and every word in the input sentence.
Parsing the sentence (using the Stanford PCFG, for example) would convert the sentence into a tree whose leaves hold POS tags (which correspond to words in the sentence), but the rest of the tree tells you how exactly these words join together to make the overall sentence. For example, an adjective and a noun might combine to be a 'Noun Phrase', which might combine with another adjective to form another Noun Phrase (e.g. quick brown fox) (the exact way the pieces combine depends on the parser in question).
You can see what parser output looks like at http://nlp.stanford.edu:8080/parser/index.jsp
A shallow parser or 'chunker' comes somewhere in between these two. A plain POS tagger is really fast but does not give you enough information and a full blown parser is slow and gives you too much. A POS tagger can be thought of as a parser which only returns the bottom-most tier of the parse tree to you. A chunker might be thought of as a parser that returns some other tier of the parse tree to you instead. Sometimes you just need to know that a bunch of words together form a Noun Phrase but don't care about the sub-structure of the tree within those words (i.e. which words are adjectives, determiners, nouns, etc and how do they combine). In such cases you can use a chunker to get exactly the information you need instead of wasting time generating the full parse tree for the sentence.
POS tagging is the process of deciding the type of every token in a text, e.g. NOUN, VERB, DETERMINER, etc. A token can be a word or a punctuation mark.
Meanwhile, shallow parsing or chunking is the process of dividing a text into syntactically related groups.
Pos Tagging output
My/PRP$ dog/NN likes/VBZ his/PRP$ food/NN ./.
Chunking output
[NP My Dog] [VP likes] [NP his food]
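A toy version of that chunking output can be produced with NLTK's RegexpParser (assumes nltk.download('averaged_perceptron_tagger'); the grammar here is deliberately crude and just for illustration):

import nltk

grammar = r"""
  NP: {<PRP\$|DT|JJ>*<NN.*>+}   # optional determiner/possessive/adjectives + noun(s)
  VP: {<VB.*>}                  # a bare verb
"""
tagged = nltk.pos_tag("My dog likes his food .".split())
print(nltk.RegexpParser(grammar).parse(tagged))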
The Constraint Grammar framework is illustrative. In its simplest, crudest form, it takes as input POS-tagged text, and adds what you could call Part of Clause tags. For an adjective, for example, it could add #NN> to indicate that it is part of an NP whose head word is to the right.
In POS tagging, we tag words using a "tagset" like {noun, verb, adj, adv, ...},
while a shallow parser tries to identify sub-components such as named entities and phrases in the sentence, like
"I'm currently (taking a Natural (Language Processing course) at (my University)) and (still confused with some basic concept.)"
D. Jurafsky and J. H. Martin say in their book that a shallow parse (partial parse) is a parse that doesn't extract all the possible information from the sentence, but only the information that is valuable in the specific case.
Chunking is just one of the approaches to shallow parsing. As mentioned, it extracts only information about basic non-recursive phrases (e.g. verb phrases or noun phrases).
Other approaches, for example, produce flattened parse trees. These trees may contain information about part-of-speech tags, but defer decisions that may require semantic or contextual factors, such as PP attachments, coordination ambiguities, and nominal compound analyses.
So, a shallow parse is a parse that produces a partial parse tree. Chunking is an example of such parsing.
When do I use each?
Also...is the NLTK lemmatization dependent upon Parts of Speech?
Wouldn't it be more accurate if it was?
Short and dense: http://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html
The goal of both stemming and lemmatization is to reduce inflectional forms and sometimes derivationally related forms of a word to a common base form.
However, the two words differ in their flavor. Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes. Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma .
From the NLTK docs:
Lemmatization and stemming are special cases of normalization. They identify a canonical representative for a set of related word forms.
Lemmatisation is closely related to stemming. The difference is that a
stemmer operates on a single word without knowledge of the context,
and therefore cannot discriminate between words which have different
meanings depending on part of speech. However, stemmers are typically
easier to implement and run faster, and the reduced accuracy may not
matter for some applications.
For instance:
The word "better" has "good" as its lemma. This link is missed by
stemming, as it requires a dictionary look-up.
The word "walk" is the base form for word "walking", and hence this
is matched in both stemming and lemmatisation.
The word "meeting" can be either the base form of a noun or a form
of a verb ("to meet") depending on the context, e.g., "in our last
meeting" or "We are meeting again tomorrow". Unlike stemming,
lemmatisation can in principle select the appropriate lemma
depending on the context.
Source: https://en.wikipedia.org/wiki/Lemmatisation
Stemming just removes or stems the last few characters of a word, often leading to incorrect meanings and spelling. Lemmatization considers the context and converts the word to its meaningful base form, which is called Lemma. Sometimes, the same word can have multiple different Lemmas. We should identify the Part of Speech (POS) tag for the word in that specific context. Here are the examples to illustrate all the differences and use cases:
If you lemmatize the word 'Caring', it would return 'Care'. If you stem, it would return 'Car' and this is erroneous.
If you lemmatize the word 'Stripes' in verb context, it would return 'Strip'. If you lemmatize it in noun context, it would return 'Stripe'. If you just stem it, it would just return 'Strip'.
You would get the same results whether you lemmatize or stem words such as walking, running, swimming... to walk, run, swim, etc.
Lemmatization is computationally expensive since it involves look-up tables and whatnot. If you have a large dataset and performance is an issue, go with stemming. Remember you can also add your own rules to stemming. If accuracy is paramount and the dataset isn't humongous, go with lemmatization.
There are two aspects to show their differences:
A stemmer returns the stem of a word, which needn't be identical to the morphological root of the word. It is usually sufficient that related words map to the same stem, even if the stem is not in itself a valid root. Lemmatisation, in contrast, returns the dictionary form of a word, which must be a valid word.
In lemmatisation, the part of speech of a word should first be determined, and the normalisation rules differ for different parts of speech. A stemmer, on the other hand, operates on a single word without knowledge of the context and therefore cannot discriminate between words which have different meanings depending on part of speech.
Reference http://textminingonline.com/dive-into-nltk-part-iv-stemming-and-lemmatization
The purpose of both stemming and lemmatization is to reduce morphological variation. This is in contrast to the more general "term conflation" procedures, which may also address lexico-semantic, syntactic, or orthographic variations.
The real difference between stemming and lemmatization is threefold:
Stemming reduces word-forms to (pseudo)stems, whereas lemmatization reduces the word-forms to linguistically valid lemmas. This difference is apparent in languages with more complex morphology, but may be irrelevant for many IR applications;
Lemmatization deals only with inflectional variance, whereas stemming may also deal with derivational variance;
In terms of implementation, lemmatization is usually more sophisticated (especially for morphologically complex languages) and usually requires some sort of lexicon. Satisfactory stemming, on the other hand, can be achieved with rather simple rule-based approaches.
Lemmatization may also be backed up by a part-of-speech tagger in order to disambiguate homonyms.
As MYYN pointed out, stemming is the process of removing inflectional and sometimes derivational affixes to reach a base form that all of the original words are probably related to. Lemmatization is concerned with obtaining the single word that allows you to group together a bunch of inflected forms. This is harder than stemming because it requires taking the context into account (and thus the meaning of the word), while stemming ignores context.
As for when you would use one or the other, it's a matter of how much your application depends on getting the meaning of a word in context correct. If you're doing machine translation, you probably want lemmatization to avoid mistranslating a word. If you're doing information retrieval over a billion documents with 99% of your queries ranging from 1-3 words, you can settle for stemming.
As for NLTK, the WordNetLemmatizer does use the part of speech, though you have to provide it (otherwise it defaults to nouns). Passing it "dove" and "v" yields "dive" while "dove" and "n" yields "dove".
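That behaviour is easy to check directly (assumes nltk.download('wordnet')):

from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()
print(wnl.lemmatize("dove", "v"))  # 'dive'
print(wnl.lemmatize("dove", "n"))  # 'dove'
print(wnl.lemmatize("dove"))       # no POS given: defaults to noun, so 'dove'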
An example-driven explanation of the differences between lemmatization and stemming:
Lemmatization handles matching “car” to “cars” along
with matching “car” to “automobile”.
Stemming handles matching “car” to “cars” .
Lemmatization implies a broader scope of fuzzy word matching that is
still handled by the same subsystems. It implies certain techniques
for low level processing within the engine, and may also reflect an
engineering preference for terminology.
[...] Taking FAST as an example,
their lemmatization engine handles not only basic word variations like
singular vs. plural, but also thesaurus operators like having “hot”
match “warm”.
This is not to say that other engines don’t handle synonyms, of course
they do, but the low level implementation may be in a different
subsystem than those that handle base stemming.
http://www.ideaeng.com/stemming-lemmatization-0601
Stemming is the process of removing the last few characters of a given word, to obtain a shorter form, even if that form doesn't have any meaning.
Examples,
"beautiful" -> "beauti"
"corpora" -> "corpora"
Stemming can be done very quickly.
Lemmatization, on the other hand, is the process of converting the given word into its base form according to the dictionary meaning of the word.
Examples,
"beautiful" -> "beauty"
"corpora" -> "corpus"
Lemmatization takes more time than stemming.
I think stemming is a rough hack people use to get all the different forms of the same word down to a base form, which need not be a legitimate word on its own.
Something like the Porter stemmer uses simple suffix-stripping rules to eliminate common word endings.
Lemmatization brings a word down to its actual base form, which, in the case of irregular verbs, might look nothing like the input word.
Something like Morpha uses FSTs (finite-state transducers) to bring nouns and verbs to their base form.
Huang et al. describe stemming and lemmatization as follows. The selection depends upon the problem and computational resource availability.
Stemming identifies the common root form of a word by removing or replacing word suffixes (e.g. “flooding” is stemmed as “flood”), while lemmatization identifies the inflected forms of a word and returns its base form (e.g. “better” is lemmatized as “good”).
Huang, X., Li, Z., Wang, C., & Ning, H. (2020). Identifying disaster related social media for rapid response: a visual-textual fused CNN architecture. International Journal of Digital Earth, 13(9), 1017–1039. https://doi.org/10.1080/17538947.2019.1633425
Stemming and lemmatization both generate the base form of inflected words; the difference is that a stem may not be an actual word, whereas a lemma is an actual word of the language.
Stemming follows an algorithm with steps to perform on the words, which makes it faster. In lemmatization, a corpus is also used to supply the lemma, which makes it slower than stemming. You may furthermore have to supply a part of speech to get the proper lemma.
The above points show that if speed is the priority, stemming should be used, since lemmatizers scan a corpus, which costs time and processing. Whether to use stemmers or lemmatizers depends on the problem you're working on.
For more info, visit the link:
https://towardsdatascience.com/stemming-vs-lemmatization-2daddabcb221
Stemming
is the process of reducing morphological variants of a word to a common root/base form (the stem). Stemming programs are commonly referred to as stemming algorithms or stemmers.
Often when searching text for a certain keyword, it helps if the search returns variations of the word.
For instance, searching for “boat” might also return “boats” and “boating”. Here, “boat” would be the stem for [boat, boater, boating, boats].
Lemmatization
looks beyond word reduction and considers a language’s full vocabulary to apply a morphological analysis to words. The lemma of ‘was’ is ‘be’ and the lemma of ‘mice’ is ‘mouse’.
I referred to this link:
https://towardsdatascience.com/stemming-vs-lemmatization-2daddabcb221
In short, the difference between these algorithms is that only lemmatization includes the meaning of the word in the evaluation. In stemming, only a certain number of letters are cut off from the end of the word to obtain a word stem. The meaning of the word does not play a role in it.
In short:
Lemmatization: uses context to transform words to their dictionary (base) form, also known as the lemma.
Stemming: uses the stem of the word, most of the time removing derivational affixes.
source