Natural Language Processing (syntactic, semantic, pragmatic) Analysis - python-3.x

My text contains text="Ravi beated Ragu"
My question will be "Who beated Ragu?"
The answer should come out as "Ravi" using NLP.
How can I do this with natural language processing?
Kindly guide me on how to proceed with syntactic, semantic and pragmatic analysis using Python.

I would suggest that you read an introductory book on NLP to get familiar with the chain of processes you are trying to achieve. You are trying to do question answering, aren't you? If so, you should read about question-answering systems. The sentence above has to be morphologically analyzed (so read about morphological analyzers), syntactically parsed (so read about syntactic parsing) and semantically understood (so read about anaphora resolution and, in linguistics, theta theory). Ravi is called the agent and Ragu the patient (or experiencer). Only then can you proceed to pursue your objectives.
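For a toy example like this one, a dependency parse already gets you most of the way. Below is a minimal sketch (not a full question-answering system, and not part of the original answer) that assumes spaCy and its small English model en_core_web_sm are installed; it parses the sentence and reads off the nominal subject as the agent.

import spacy  # pip install spacy; python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
# "beated" is not standard English ("beat" is the past tense), but the parser
# will usually still treat it as the main verb of the sentence.
doc = nlp("Ravi beated Ragu")

for token in doc:
    # nsubj = nominal subject (the agent); the object is the patient.
    print(token.text, token.dep_, token.head.text)

agent = [t.text for t in doc if t.dep_ == "nsubj"]
print(agent)  # expected: ['Ravi']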
I hope this helps you!

You need lots of data.
What you are looking for might be reading comprehension, so I recommend you read the papers (for instance, "Reading Comprehension on the SQuAD Dataset") by the competitors on SQuAD.
For simple tasks (implying limited generalization), and if you don't have such a large corpus, you can use regular expressions by manually writing some rules, as sketched below.
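For example, a hand-written rule for the "Who <verb> <object>?" pattern could look like the following sketch. The answer_who helper and the rule itself are made up for illustration only; a real rule set would have to grow with every new question type.

import re

text = "Ravi beated Ragu"
question = "Who beated Ragu?"

def answer_who(question, text):
    # Pull the verb and object out of a "Who <verb> <object>?" question...
    m = re.match(r"Who\s+(\w+)\s+(\w+)\?", question, re.IGNORECASE)
    if not m:
        return None
    verb, obj = m.groups()
    # ...then look for "<someone> <verb> <object>" in the text.
    m = re.search(r"(\w+)\s+" + re.escape(verb) + r"\s+" + re.escape(obj),
                  text, re.IGNORECASE)
    return m.group(1) if m else None

print(answer_who(question, text))  # Ravi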

Related

Logical Semantics, Information Extraction and Summarization

I want to get a general idea about these questions in the field of data analysis and NLP.
What steps are involved if I want to retrieve meaningful information from domain-specific text and understand the general idea of the text?
Another question: does a larger amount of analyzed text give better results?
Excuse my ignorance. I want to understand more, and it would help me a lot if you suggested some tutorials or readings.
I suggest "Speech and Language Processing" by Daniel Jurafsky & James H. Martin. The last chapters are about Information Extraction and Summarization.
As for your question about text size, it depends. In my experience, Information Extraction works better with short sentences. However, you will need a large data set to train your system to recognize relevant patterns. A small example of a first extraction step is sketched below.
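As a concrete starting point for the information-extraction part, NLTK's built-in tokenizer, POS tagger and named-entity chunker can be chained as below. This is only a rough sketch (my addition, not from the book), and it assumes the relevant NLTK data packages (punkt, averaged_perceptron_tagger, maxent_ne_chunker, words) have been downloaded.

import nltk

# One-time downloads (uncomment on first run):
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
# nltk.download("maxent_ne_chunker"); nltk.download("words")

sentence = "Barack Obama visited Microsoft headquarters in Seattle."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(tagged)

# Collect (entity text, entity label) pairs from the chunk tree.
entities = [(" ".join(word for word, tag in subtree.leaves()), subtree.label())
            for subtree in tree.subtrees()
            if subtree.label() != "S"]
print(entities)  # e.g. [('Barack Obama', 'PERSON'), ('Microsoft', 'ORGANIZATION'), ('Seattle', 'GPE')]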

Elementary Sentence Construction

I am working on an NLP project and I am looking for a parser, written in C#, to construct simple sentences from complex ones, since sentences may have a complex grammatical structure with multiple embedded clauses.
Any help?
Text summarisation and sentence simplification are very much an open research area. Wikipedia has articles about both; you can start from there. Beware: this is a hard problem, and chances are the state-of-the-art systems are far worse than you might expect. There isn't an off-the-shelf piece of software that you can just grab to solve all your problems. You will have some success with the more basic sentences, but performance will degrade as your sentences get more complex. Have a look at the articles referenced on Wikipedia or google around to get an idea of what is possible. My impression is that most readily available software packages are for academic purposes and might take a bit of work to get running. A very rough illustration of the idea follows below.
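The sketch below is a very naive clause splitter in Python/spaCy (the question asks for C#, but the idea transfers); it is nowhere near a real simplification system and merely pulls out the subtree of each clause-like dependent as a candidate simple clause. The CLAUSE_DEPS labels are an assumption tied to spaCy's English dependency label scheme, and the en_core_web_sm model is assumed to be installed.

import spacy

nlp = spacy.load("en_core_web_sm")
CLAUSE_DEPS = {"advcl", "relcl", "ccomp", "conj"}  # clause-like dependency labels

def naive_clauses(text):
    doc = nlp(text)
    clauses = []
    for sent in doc.sents:
        clauses.append(sent.text)  # the full sentence (main clause plus everything else)
        for tok in sent:
            if tok.dep_ in CLAUSE_DEPS and tok.pos_ in ("VERB", "AUX"):
                # The subtree of a clausal head is a rough stand-in for the embedded clause.
                clauses.append(" ".join(t.text for t in tok.subtree))
    return clauses

print(naive_clauses("The man, who was tired, sat down because the meeting had run late."))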

Analysing meaning of sentences

Are there any tools that analyze the meaning of given sentences? Recommendations are greatly appreciated.
Thanks in advance!
I am also looking for similar tools. One thing I found recently was this sentiment analysis tool built by researchers at Stanford.
It provides a model for analyzing the sentiment of a given sentence. It's interesting that even this seemingly simple idea is quite involved to model accurately. It also uses machine learning to achieve higher accuracy. There is a live demo where you can input sentences to analyze.
http://nlp.stanford.edu/sentiment/
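If you want something scriptable rather than the live demo, a quick approximation in Python is NLTK's VADER sentiment analyzer. This is my suggestion, not the Stanford model, and it is a much simpler, lexicon-based approach than their recursive neural network; it assumes the vader_lexicon data has been downloaded.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# nltk.download("vader_lexicon")  # one-time download

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("The movie was surprisingly good, but far too long."))
# -> a dict with 'neg', 'neu', 'pos' and an overall 'compound' score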
I also saw this RelEx semantic dependency relationship extractor.
http://wiki.opencog.org/w/Sentence_algorithms
Some natural language understanding tools can analyze the meaning of sentences, including NLTK and Attempto Controlled English. There are several implementations of discourse representation structures and semantic parsers with a similar purpose.
There are also several parsers that can be used to generate a meaning representation from the text that is being parsed.
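For a taste of what a hand-built meaning representation looks like, NLTK ships a logic package and a discourse representation theory (DRT) module. The sketch below (my addition) only parses expressions you write yourself; it does not derive them from raw text.

from nltk.sem import Expression
from nltk.sem.drt import DrtExpression

# A first-order logic expression for "Ravi beat Ragu".
expr = Expression.fromstring("beat(ravi, ragu)")
print(expr)

# A discourse representation structure with one discourse referent.
drs = DrtExpression.fromstring("([x], [man(x), walks(x)])")
print(drs)
print(drs.fol())  # the same DRS converted to first-order logic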

NLP: Language Analysis Techniques and Algorithms

Situation:
I wish to perform a Deep-level Analysis of a given text, which would mean:
Ability to extract keywords and assign importance levels based on contextual usage.
Ability to draw conclusions on the mood expressed.
Ability to hint at the education level (Word does this a little bit, but I want something more automated).
Ability to mix and match phrases and find certain communication patterns.
Ability to draw substantial meaning out of it, so that it can be quantified and can be processed for answering by a machine.
Question:
What kind of algorithms and techniques need to be employed for this?
Is there a software that can help me in doing this?
When you figure out how to do this please contact DARPA, the CIA, the FBI, and all other U.S. intelligence agencies. Contracts for projects like these are items of current research worth many millions in research grants. ;)
That being said, you'll need to process the text in layers and analyze it at each of those layers. For items 2 and 3, you'll find that training an SVM on word n-grams (try n = 3) will help; see the sketch after this answer. For items 1 and 4 you'll want deeper analysis. Use a tool like NLTK, or one of the many other parsers, to find the subject words in sentences and the words related to them. Also use WordNet (from Princeton)
to find the most common senses used and take those as keywords.
Item 5 is extremely challenging. I think intelligent use of the data above can give you what you want, but you'll need to use all your grammatical and programming knowledge, and the result will still be very coarse-grained.
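As an illustration of the SVM-on-n-grams suggestion above (items 2 and 3), here is a minimal scikit-learn sketch. The tiny hand-made training set is purely a placeholder; you would need real labelled data for anything useful.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training data: texts labelled with a "mood" (item 2).
texts = ["I am thrilled with the results", "This is wonderful news",
         "I am deeply disappointed", "This is a terrible outcome"]
labels = ["positive", "positive", "negative", "negative"]

# Word n-grams up to length 3, fed into a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
model.fit(texts, labels)

print(model.predict(["What a wonderful surprise"]))  # expected: ['positive']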
It sounds like you might be open to some experimentation, in which case a toolkit approach might be best. If so, look at the NLTK (Natural Language Toolkit) for Python. It is open source under the Apache license, and there are a couple of excellent books about it (including one from O'Reilly, which is also released online under a Creative Commons license).

Evaluate the content of a paragraph

We are building a database of scientific papers and performing analysis on the abstracts. The goal is to be able to say "Interest in this topic has gone up 20% from last year". I've already tried keyword analysis and haven't really liked the results. So now I am trying to move on to phrases and the proximity of words to each other, and I realize I am in over my head. Can anyone point me to a better solution, or at the very least give me a good term to google to learn more?
The language used is Python, but I don't think that really affects your answer. Thanks in advance for the help.
It is a big subject, but a good introduction to NLP problems like this can be found in the NLTK toolkit. It is intended for teaching and works with Python, i.e. it is good for dabbling and experimenting. There is also a very good open-source book (also available in paper form from O'Reilly) on the NLTK website.
This is just a guess; I'm not sure if this approach will work. If you're looking at phrases and the proximity of words, perhaps you can build up a Markov chain? That way you can get an idea of the frequency of certain phrases/words in relation to others (based on the order of your Markov chain).
So you build a Markov chain and frequency distribution for the year 2009. Then you build another one at the end of 2010 and compare the frequencies (of certain phrases and words). You might have to normalize the text, though. A minimal sketch follows below.
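Here is a minimal sketch of that comparison for a first-order Markov chain (i.e. bigrams), using plain Python and whitespace tokenization to keep it self-contained. The two abstract collections are placeholders for your 2009 and 2010 data.

from collections import Counter

def bigram_freqs(texts):
    # Lowercase, whitespace-tokenize, and count adjacent word pairs.
    counts = Counter()
    total = 0
    for text in texts:
        words = text.lower().split()
        pairs = list(zip(words, words[1:]))
        counts.update(pairs)
        total += len(pairs)
    # Relative frequencies so corpora of different sizes are comparable.
    return {pair: n / total for pair, n in counts.items()}

abstracts_2009 = ["we study gene expression in yeast",
                  "gene expression profiling of tumours"]
abstracts_2010 = ["deep sequencing of gene expression",
                  "gene expression and network inference"]

f09 = bigram_freqs(abstracts_2009)
f10 = bigram_freqs(abstracts_2010)

phrase = ("gene", "expression")
print(f09.get(phrase, 0.0), f10.get(phrase, 0.0))  # compare year-over-year frequency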
Other than that, what comes to mind is natural language processing techniques in general (there is a lot of literature surrounding the topic!).
