Identify prepositions and individual POS - nlp

I am trying to find the correct part of speech for each word in a paragraph. I am using the Stanford POS Tagger. However, I am stuck at a point.
I want to identify prepositions from the paragraph.
Penn Treebank Tagset says that:
IN Preposition or subordinating conjunction
How can I be sure whether the current word is a preposition or a subordinating conjunction? How can I extract only prepositions from the paragraph in this case?

You can't be sure. The reason for this somewhat strange PoS is that it's really hard to automatically determine whether, for example, for is a preposition or a subordinating conjunction. So in order for automatic taggers to have better precision, this distinction is simply ignored. Note that there is also a tag TO, which is given to any occurrence of to, regardless of its function as a preposition, infinitive particle or whatever (I think there are others).
If you need to identify prepositions properly, you need to retrain a tagger with a modified tag set, or maybe train a classifier which takes PoS-tagged text and only does this final disambiguation.

I have had a breakthrough in determining whether a word is actually a preposition or a subordinating conjunction.
I have parsed the following sentence:
She left early because Mike arrived with his new girlfriend.
(here because is a subordinating conjunction)
After POS tagging
She_PRP left_VBD early_RB because_IN Mike_NNP arrived_VBD with_IN
his_PRP$ new_JJ girlfriend_NN ._.
Here, to determine whether because is a preposition or not, I have parsed the sentence.
Here because (tagged IN) has SBAR (subordinate clause) as its direct parent.
with is also tagged IN, but its direct parent is PP, so it is a preposition.
Example 2 :
Keep your hand on the wound until the nurse asks you to take it off.
(here until is a subordinating conjunction)
The POS tagging is:
Keep_VB your_PRP$ hand_NN on_IN the_DT wound_NN until_IN the_DT
nurse_NN asks_VBZ you_PRP to_TO take_VB it_PRP off_RP ._.
So, until and on are both marked as IN.
However, the picture gets clearer when we actually parse the sentence.
So finally I conclude that because is a subordinating conjunction and with is a preposition.
I tried many variations of sentences; this worked for almost all of them, except some cases involving before and after.
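The SBAR-vs-PP heuristic above can be sketched in plain Python. The bracketed parse string below is a hand-written approximation of what a constituency parser (e.g. Stanford's) would produce for the first example sentence, and the tiny tree reader exists only to keep the sketch self-contained; a real pipeline would take the parser's output directly.

```python
# Sketch of the heuristic above: an IN-tagged word whose parent
# constituent is PP is a preposition; one sitting directly under
# SBAR is a subordinating conjunction.

def parse_sexpr(s):
    """Read a bracketed parse string into (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def read(i):
        label = tokens[i + 1]              # token right after '('
        children, i = [], i + 2
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = read(i)
                children.append(child)
            else:
                children.append(tokens[i])  # leaf word
                i += 1
        return (label, children), i + 1
    tree, _ = read(0)
    return tree

def classify_in(node, parent_label=None, out=None):
    """Collect (word, role) pairs for every IN leaf in the tree."""
    if out is None:
        out = []
    label, children = node
    for child in children:
        if isinstance(child, tuple):
            classify_in(child, label, out)
        elif label == "IN":
            role = ("preposition" if parent_label == "PP"
                    else "subordinating conjunction")
            out.append((child, role))
    return out

# Hand-written parse of the example sentence (assumed bracketing).
bracketed = ("(ROOT (S (NP (PRP She)) (VP (VBD left) (ADVP (RB early)) "
             "(SBAR (IN because) (S (NP (NNP Mike)) (VP (VBD arrived) "
             "(PP (IN with) (NP (PRP$ his) (JJ new) (NN girlfriend))))))) (. .)))")

print(classify_in(parse_sexpr(bracketed)))
# [('because', 'subordinating conjunction'), ('with', 'preposition')]
```

The same walk applies unchanged to the second example: until ends up under SBAR and on under PP.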

Related

Extracting <subject, predicate, object> triplet from unstructured text

I need to extract simple triplets from unstructured text. Usually it is of the form noun-verb-noun, so I have tried POS tagging and then extracting nouns and verbs from the neighbourhood.
However, this leads to a lot of special cases and gives low accuracy.
Will Syntactic/semantic parsing help in this scenario?
Will ontology based information extraction be more useful?
I expect that syntactic parsing would be the best fit for your scenario. Some trivial template-matching method with POS tags might work, where you find verbs preceded and followed by a single noun, and take the former to be the subject and the latter the object. However, it sounds like you've already tried something like that -- unless your neighbourhood extraction ignores word order (which would be a bit silly - you'd be guessing which noun was the subject and which was the object, and that's assuming exactly two nouns in each sentence).
Since you're looking for {s, v, o} triplets, chances are you won't need semantic or ontological information. That would be useful if you wanted more information, e.g. agent-patient relations or deeper knowledge extraction.
{s,v,o} is shallow syntactic information, and given that syntactic parsing is considerably more robust and accessible than semantic parsing, that might be your best bet. Syntactic parsing will handle simple word re-orderings, e.g. "The hamburger was eaten by John." => {John, eat, hamburger}; you'd also be able to specifically handle intransitive and ditransitive verbs, which might be issues for a more naive approach.
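For comparison, here is roughly what the naive template-matching baseline looks like (a sketch in my own code, not the asker's): it takes (word, tag) pairs in Penn Treebank tags and grabs the nearest noun on each side of a verb. The passive example shows why plain surface order breaks it.

```python
# Naive {s, v, o} extraction by template matching over POS tags:
# for each verb, take the nearest noun to its left as the subject
# and the nearest noun to its right as the object.

def extract_svo(tagged):
    triples = []
    for i, (word, tag) in enumerate(tagged):
        if tag.startswith("VB"):
            subj = next((w for w, t in reversed(tagged[:i])
                         if t.startswith("NN")), None)
            obj = next((w for w, t in tagged[i + 1:]
                        if t.startswith("NN")), None)
            if subj and obj:
                triples.append((subj, word, obj))
    return triples

active = [("John", "NNP"), ("ate", "VBD"),
          ("the", "DT"), ("hamburger", "NN")]
print(extract_svo(active))   # [('John', 'ate', 'hamburger')]

# The passive re-ordering fools it: surface order swaps the roles,
# which a syntactic parse would recover correctly.
passive = [("The", "DT"), ("hamburger", "NN"), ("was", "VBD"),
           ("eaten", "VBN"), ("by", "IN"), ("John", "NNP")]
print(extract_svo(passive))
# [('hamburger', 'was', 'John'), ('hamburger', 'eaten', 'John')]
```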

Uses/Applications of Part-of-speech-tagging (POS Tagging)

I understand the implicit value of part-of-speech tagging and have seen mentions about its use in parsing, text-to-speech conversion, etc.
Could you tell me how the output of a PoS tagger is formatted?
Also, could you explain how such an output is used by other tasks/parts of an NLP system?
One purpose of PoS tagging is to disambiguate homonyms.
For instance, take this sentence:
I fish a fish
The same sentence in French would be Je pĂȘche un poisson.
Without tagging, fish would be translated the same way in both cases, which would lead to
a wrong translation. However, after PoS tagging, the sentence would be
I_PRON fish_VERB a_DET fish_NOUN
From a computer's point of view, both words are now distinct. This way, they can be processed much more efficiently (in our example, fish_VERB will be translated to pĂȘche and fish_NOUN to poisson).
Basically, the goal of a POS tagger is to assign linguistic (mostly grammatical) information to sub-sentential units. Such units are called tokens and, most of the time, correspond to words and symbols (e.g. punctuation).
Considering the format of the output, it doesn't really matter as long as you get a sequence of token/tag pairs. Some POS taggers allow you to specify some specific output format, others use XML or CSV/TSV, and so on.
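As an illustration of that flexibility (the format names here are just common conventions, not tied to any particular tagger), the same token/tag sequence can be serialized in more than one way:

```python
# The same tagged sentence rendered in two common serializations:
# inline word_TAG pairs, and one token per line (TSV, CoNLL-style).

tagged = [("I", "PRON"), ("fish", "VERB"), ("a", "DET"), ("fish", "NOUN")]

inline = " ".join(f"{word}_{tag}" for word, tag in tagged)
tsv = "\n".join(f"{word}\t{tag}" for word, tag in tagged)

print(inline)  # I_PRON fish_VERB a_DET fish_NOUN
print(tsv)
```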

Infinitive form disambiguation

How do you decide whether a word in a sentence is an infinitive or not?
For example here "fixing" is infinitive:
Fixing the door was also easy but fixing the window was very hard.
But in
I am fixing the door
it is not. How do people disambiguate these cases?
To elaborate on my comment:
In PoS tagging, choosing between a gerund (VBG) and a noun (NN) is quite subtle and has many special cases. My understanding is that fixing should be tagged as a gerund in your first sentence, because it can be modified by an adverb in that context. Citing from the Penn PoS tagging guidelines (page 19):
"While both nouns and gerunds can be preceded by an article or a possessive pronoun, only a noun (NN) can be modified by an adjective, and only a gerund (VBG) can be modified by an adverb."
EXAMPLES:
Good/JJ cooking/NN is something to enjoy.
Cooking/VBG well/RB is a useful skill.
Assuming you meant 'automatically disambiguate', this task requires a bit of processing (POS tagging and syntactic parsing). The idea is to find instances of a verb that are not preceded by an agreeing subject noun phrase. If you also want to catch infinitive forms like "to fix", just add those to the list of forms you are looking for.
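The adjective-vs-adverb rule from the guidelines can be written down as a toy decision function (a sketch of just this one heuristic; real taggers combine many contextual features, and the function name is mine):

```python
# Toy encoding of the Penn guideline quoted above: an -ing form
# modified by an adjective reads as a noun (NN), while one modified
# by an adverb reads as a gerund (VBG).

def tag_ing_form(prev_tag=None, next_tag=None):
    if prev_tag == "JJ":              # "Good/JJ cooking" -> noun
        return "NN"
    if "RB" in (prev_tag, next_tag):  # "Cooking well/RB" -> gerund
        return "VBG"
    return "NN"                       # default to the noun reading

print(tag_ing_form(prev_tag="JJ"))   # NN
print(tag_ing_form(next_tag="RB"))   # VBG
```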

What is the best way to classify following words in POS tagging?

I am doing POS tagging. Given the following tokens in the training set, is it better to consider each token as Word1/POStag and Word2/POStag, or to consider them as one word, that is Word1/Word2/POStag?
Examples: (the POSTag is not required to be included)
Bard/EMS
Interstate/Johnson
Polo/Ralph
IBC/Donoghue
ISC/Bunker
Bendix/King
mystery/comedy
Jeep/Eagle
B/T
Hawaiian/Japanese
IBM/PC
Princeton/Newport
editing/electronic
Heller/Breene
Davis/Zweig
Fleet/Norstar
a/k/a
1/2
Any suggestion is appreciated.
The examples don't seem to fall into one category with respect to the use of the slash -- a/k/a is a phrase acronym, 1/2 is a number, mystery/comedy indicates something in between the two words, etc.
I feel there is no treatment of the component words that would work for all the cases in question, and therefore the better option is to handle them as unique words. At decoding time, when the tagger is presented with more previously unseen examples of such words, the decision can often be made based on the context rather than the word itself.
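The two training-set representations under discussion amount to a tokenisation choice, sketched here (function names are mine, purely for illustration):

```python
# The two options for a slash-joined training token: keep it as one
# word (a single tag for the whole thing), or split it into parts
# (one tag each). The answer above argues for the first option.

def keep_whole(token):
    return [token]

def split_on_slash(token):
    return [part for part in token.split("/") if part]

print(keep_whole("Bendix/King"))      # ['Bendix/King']
print(split_on_slash("Bendix/King"))  # ['Bendix', 'King']
print(split_on_slash("a/k/a"))        # ['a', 'k', 'a']
```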

What Is the Difference Between POS Tagging and Shallow Parsing?

I'm currently taking a Natural Language Processing course at my university and am still confused by some basic concepts. I got the definition of POS tagging from the Foundations of Statistical Natural Language Processing book:
Tagging is the task of labeling (or tagging) each word in a sentence
with its appropriate part of speech. We decide whether each word is a
noun, verb, adjective, or whatever.
But I can't find a definition of shallow parsing in the book, even though it also describes shallow parsing as one of the uses of POS tagging. So I began to search the web and found no direct explanation of shallow parsing, but Wikipedia says:
Shallow parsing (also chunking, "light parsing") is an analysis of a sentence which identifies the constituents (noun groups, verbs, verb groups, etc.), but does not specify their internal structure, nor their role in the main sentence.
I frankly don't see the difference, but it may be because of my English or just me not understanding a simple basic concept. Can anyone please explain the difference between shallow parsing and POS tagging? Is shallow parsing also often called shallow semantic parsing?
Thanks in advance.
POS tagging would give a POS tag to each and every word in the input sentence.
Parsing the sentence (using the Stanford PCFG, for example) would convert the sentence into a tree whose leaves hold POS tags (which correspond to words in the sentence), but the rest of the tree tells you how exactly these words join together to make the overall sentence. For example, an adjective and a noun might combine to form a 'Noun Phrase', which might combine with another adjective to form another Noun Phrase (e.g. quick brown fox) (the exact way the pieces combine depends on the parser in question).
You can see what parser output looks like at http://nlp.stanford.edu:8080/parser/index.jsp
A shallow parser or 'chunker' comes somewhere in between these two. A plain POS tagger is really fast but does not give you enough information, and a full-blown parser is slow and gives you too much. A POS tagger can be thought of as a parser which only returns the bottom-most tier of the parse tree; a chunker can be thought of as a parser that returns some other tier of the parse tree instead. Sometimes you just need to know that a bunch of words together form a Noun Phrase, but don't care about the sub-structure of the tree within those words (i.e. which words are adjectives, determiners, nouns, etc., and how they combine). In such cases you can use a chunker to get exactly the information you need, instead of wasting time generating the full parse tree for the sentence.
POS tagging is the process of deciding the type of every token in a text, e.g. NOUN, VERB, DETERMINER, etc. A token can be a word or punctuation.
Meanwhile, shallow parsing or chunking is the process of dividing a text into syntactically related groups.
POS tagging output
My/PRP$ dog/NN likes/VBZ his/PRP$ food/NN ./.
Chunking output
[NP My dog] [VP likes] [NP his food]
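The NP chunks in that output can be reproduced with a tiny rule-based chunker (a toy: the pattern (PRP$|DT)? JJ* NN+ is my own minimal rule, the verb is left unchunked, and real chunkers are learned rather than hand-written):

```python
# Minimal NP chunker: greedily match (PRP$|DT)? JJ* NN+ over
# Penn Treebank tags and wrap each match in an [NP ...] chunk;
# every other token is passed through unchanged.

def chunk_np(tagged):
    chunks, i = [], 0
    while i < len(tagged):
        j = i
        if tagged[j][1] in ("PRP$", "DT"):        # optional determiner
            j += 1
        while j < len(tagged) and tagged[j][1] == "JJ":
            j += 1                                # any adjectives
        k = j
        while k < len(tagged) and tagged[k][1].startswith("NN"):
            k += 1                                # one or more nouns
        if k > j:                                 # found a noun head
            chunks.append("[NP " + " ".join(w for w, _ in tagged[i:k]) + "]")
            i = k
        else:
            chunks.append(tagged[i][0])           # pass token through
            i += 1
    return " ".join(chunks)

sentence = [("My", "PRP$"), ("dog", "NN"), ("likes", "VBZ"),
            ("his", "PRP$"), ("food", "NN")]
print(chunk_np(sentence))  # [NP My dog] likes [NP his food]
```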
The Constraint Grammar framework is illustrative. In its simplest, crudest form, it takes as input POS-tagged text, and adds what you could call Part of Clause tags. For an adjective, for example, it could add #NN> to indicate that it is part of an NP whose head word is to the right.
In POS tagging, we tag words using a "tagset" like {noun, verb, adj, adv, ...},
while a shallow parser tries to identify sub-components such as named entities and phrases in the sentence, like
"I'm currently (taking a Natural (Language Processing course) at (my University)) and (still confused with some basic concept.)"
D. Jurafsky and J. H. Martin say in their book that a shallow parse (partial parse) is a parse that doesn't extract all the possible information from the sentence, but just extracts the information valuable in the specific case.
Chunking is just one of the approaches to shallow parsing. As mentioned, it extracts only information about basic non-recursive phrases (e.g. verb phrases or noun phrases).
Other approaches, for example, produce flattened parse trees. These trees may contain information about part-of-speech tags, but defer decisions that may require semantic or contextual factors, such as PP attachments, coordination ambiguities, and nominal compound analyses.
So, a shallow parse is a parse that produces a partial parse tree. Chunking is an example of such parsing.
