Detecting language using Stanford NLP

I'm wondering if it is possible to use Stanford CoreNLP to detect which language a sentence is written in? If so, how precise can those algorithms be?

Almost certainly there is no language identification in Stanford CoreNLP at the moment. 'Almost' - because nonexistence is much harder to prove.
EDIT: Nevertheless, here is some circumstantial evidence:
language identification is not mentioned on the main page, the CoreNLP page, or the FAQ (although there is a question 'How do I run CoreNLP on other languages?'), nor in the 2014 paper by CoreNLP's authors;
tools that combine several NLP libraries, including Stanford CoreNLP, use another library for language identification, for example DKPro Core ASL; other users discussing language identification and CoreNLP also don't mention this capability;
the CoreNLP source contains Language classes, but nothing related to language identification - you can manually check all 84 occurrences of the word 'language' there.
Try Apache Tika, TextCat, or the Language Detection Library for Java (its authors report over 99% precision for 53 languages).
In general, quality depends on the size of the input text: if it is long enough (say, at least several words, and not specially chosen to be confusing), precision can be quite good - about 95%.
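As a quick illustration, here is a minimal sketch using Apache Tika's LanguageIdentifier. This is the Tika 1.x API; later Tika versions moved language detection into the separate tika-langdetect module, so treat the exact class and method names as version-dependent.

```java
import org.apache.tika.language.LanguageIdentifier;

public class LangIdExample {
    public static void main(String[] args) {
        // A few short snippets in different languages.
        String[] samples = {
            "This is clearly an English sentence about language detection.",
            "Ceci est une phrase écrite en français pour tester la détection.",
            "Dies ist ein deutscher Beispielsatz zur Spracherkennung."
        };
        for (String text : samples) {
            LanguageIdentifier identifier = new LanguageIdentifier(text);
            // getLanguage() returns an ISO 639 code such as "en", "fr", "de";
            // isReasonablyCertain() gives a rough confidence signal.
            System.out.printf("%-4s certain=%-5s %s%n",
                    identifier.getLanguage(),
                    identifier.isReasonablyCertain(),
                    text);
        }
    }
}
```

As noted above, the longer the input, the more reliable the guess; one-word inputs are where all of these libraries struggle.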

Stanford CoreNLP doesn't have language ID (at least not yet); see http://nlp.stanford.edu/software/corenlp.shtml
There are many more language detection/identification tools. But do take the reported precision with a pinch of salt: it is usually evaluated narrowly, bounded by:
a fixed list of languages,
test sentences of substantial length, each in a single language, and
a skewed proportion of training to testing instances.
Notable language ID tools include:
TextCat (http://cran.r-project.org/web/packages/textcat/index.html)
CLD2 (https://code.google.com/p/cld2/)
LingPipe (http://alias-i.com/lingpipe/demos/tutorial/langid/read-me.html)
LangID (https://github.com/saffsd/langid.py)
CLD3 (https://github.com/google/cld3)
For a more exhaustive list from meta-guide.com, see http://meta-guide.com/software-meta-guide/100-best-github-language-identification/
Noteworthy language-identification shared tasks (with training/testing data) include:
Native Language ID (NLI 2013)
Discriminating Similar Languages (DSL 2014)
TweetID (2015)
Also take a look at:
Language Identification: The Long and the Short of the Matter
The Problems of Language Identification within Hugely Multilingual Data Sets
Selecting and Weighting N-Grams to Identify 1100 Languages
Indigenous Tweets
Microblog Language Identification: Overcoming the Limitations of Short, Unedited and Idiomatic Text

Related

Difference between dependencies(basic and enhanced) from Stanford CoreNLP?

Difference between basic-dependencies,collapsed-dependencies and collapsed-ccprocessed-dependencies in Stanford CoreNLP and how to use them to understand query?
A good way to see the difference by example is with the online demo (corenlp.run). Basic, collapsed, and cc-processed are roughly the old ("Stanford Dependencies") equivalents of basic, enhanced, and enhanced++ in the newer ("Universal Dependencies") representation.
At a high level, the basic dependencies are meant to be easier to parse -- e.g., they're always a tree, the label set is small, etc. The enhanced[++] dependencies (like their predecessors, "collapsed" and "cc-processed") are deterministic transformations of the basic dependencies that are intended to make them a bit easier to work with and a bit more semantic -- for example, by labelling the preposition on the arc (prep:of in Stanford Dependencies; nmod:of in Universal Dependencies).
The full documentation of the differences (for Universal Dependencies) can be found in: Schuster and Manning (2016). "Enhanced English Universal Dependencies: An Improved Representation for Natural Language Understanding Tasks". The original Stanford Dependencies are perhaps best documented in the Stanford Dependencies Manual.
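For concreteness, here is a minimal sketch of pulling the different representations out of a CoreNLP pipeline. The annotation keys below are the ones used in recent CoreNLP releases; older releases expose the collapsed variants under CollapsedDependenciesAnnotation and CollapsedCCProcessedDependenciesAnnotation instead, and the example sentence is just a toy input.

```java
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;

import java.util.Properties;

public class DependencyFlavors {
    public static void main(String[] args) {
        Properties props = new Properties();
        // depparse produces the basic, enhanced, and enhanced++ graphs.
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,depparse");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation("The director of the company gave a talk in Boston.");
        pipeline.annotate(doc);

        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            SemanticGraph basic =
                    sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class);
            SemanticGraph enhanced =
                    sentence.get(SemanticGraphCoreAnnotations.EnhancedDependenciesAnnotation.class);
            SemanticGraph enhancedPlusPlus =
                    sentence.get(SemanticGraphCoreAnnotations.EnhancedPlusPlusDependenciesAnnotation.class);

            System.out.println("-- basic --");
            System.out.println(basic.toString(SemanticGraph.OutputFormat.LIST));
            System.out.println("-- enhanced --");
            System.out.println(enhanced.toString(SemanticGraph.OutputFormat.LIST));
            System.out.println("-- enhanced++ --");
            System.out.println(enhancedPlusPlus.toString(SemanticGraph.OutputFormat.LIST));
        }
    }
}
```

In the enhanced output you should see relations such as nmod:of where the basic graph only has a plain nmod plus a separate case arc for the preposition, which is exactly the "preposition on the arc" difference described above.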

What's the difference between Stanford Tagger, Parser and CoreNLP?

I'm currently using different tools from the Stanford NLP Group and trying to understand the differences between them. It seems to me that they somehow overlap, since I can use the same features in different tools (e.g., tokenizing and POS-tagging a sentence can be done with the Stanford POS Tagger, the Parser, and CoreNLP).
I'd like to know what's the actual difference between each tool and in which situations I should use each of them.
All the Java classes from the same release are the same across the different downloads, and, yes, they overlap. On a code basis, the parser and tagger are basically subsets of what is available in CoreNLP, except that they have a couple of little add-ons of their own, such as the GUI for the parser. In terms of provided models, the parser and tagger come with models for a range of languages, whereas CoreNLP ships only with English out of the box. However, you can then download language-particular jars for CoreNLP which provide all the models we have for different languages. Anything that is available in any of the releases is present on the CoreNLP GitHub site: https://github.com/stanfordnlp/CoreNLP
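To make the overlap concrete, here is a minimal sketch that tags the same sentence once with the standalone MaxentTagger and once through a CoreNLP pipeline. The tagger model path is an assumption and changes between releases; point it at whatever .tagger file ships with your download.

```java
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.tagger.maxent.MaxentTagger;

import java.util.Properties;

public class TaggerVsCoreNLP {
    public static void main(String[] args) {
        String text = "Stanford NLP tools overlap quite a bit.";

        // 1) The standalone POS tagger distribution.
        //    The model path below is illustrative; use the .tagger file from your release.
        MaxentTagger tagger = new MaxentTagger(
                "edu/stanford/nlp/models/pos-tagger/english-left3words-distsim.tagger");
        System.out.println("MaxentTagger: " + tagger.tagString(text));

        // 2) The same tagging done through a CoreNLP pipeline.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        Annotation doc = new Annotation(text);
        pipeline.annotate(doc);

        StringBuilder tagged = new StringBuilder("CoreNLP:      ");
        for (CoreLabel token : doc.get(CoreAnnotations.TokensAnnotation.class)) {
            tagged.append(token.word()).append('_')
                  .append(token.get(CoreAnnotations.PartOfSpeechAnnotation.class)).append(' ');
        }
        System.out.println(tagged.toString().trim());
    }
}
```

Both paths end up running the same tagging code; the CoreNLP route just wraps it in the annotator pipeline, which is why the outputs should match for the same model.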

Accuracy: ANNIE vs Stanford NLP vs OpenNLP with UIMA

My work is planning on using a UIMA cluster to run documents through to extract named entities and whatnot. As I understand it, UIMA has very few NLP components packaged with it. I've been testing GATE for a while now and am fairly comfortable with it. It does OK on normal text, but when we run it through some representative test data, the accuracy drops way down. The text data we have internally is sometimes all caps, sometimes all lowercase, or a mix of the two in the same document. Even using ANNIE's all-caps rules, the accuracy still leaves much to be desired. I've recently heard of Stanford NLP and OpenNLP but haven't had time to extensively train and test them. How do those two compare in terms of accuracy with ANNIE? Do they work with UIMA like GATE does?
Thanks in advance.
It's not possible/reasonable to give a general estimate of the performance of these systems. As you said, on your test data the accuracy declines. That's for several reasons: one is the language characteristics of your documents, another is the characteristics of the annotations you are expecting to see. AFAIK, for every NER task there are similar but still different annotation guidelines.
That said, on your questions:
ANNIE is the only free, open source, rule-based NER system in Java I could find. It's written for news articles and, I guess, tuned for the MUC 6 task. It's good for proofs of concept, but getting a bit outdated. The main advantage is that you can start improving it without any knowledge of machine learning or NLP - well, maybe a little Java. Just study JAPE and give it a shot.
OpenNLP, Stanford NLP, etc. come by default with models for news articles and perform better than ANNIE (just looking at results; I never tested them on a big corpus). I liked the Stanford parser better than OpenNLP, again just looking at documents, mostly news articles.
Without knowing what your documents look like I really can't say much more. You should decide whether your data is suitable for rules, or whether you go the machine-learning way and use OpenNLP, the Stanford parser, the Illinois tagger, or anything else. The Stanford parser seems more appropriate for just pouring in your data, training, and producing results, while OpenNLP seems more appropriate for trying different algorithms, playing with parameters, etc.
As for your GATE versus UIMA question, I tried both and found a more active community and better documentation for GATE. Sorry for giving personal opinions :)
Just for the record answering the UIMA angle: For both Stanford NLP and OpenNLP, there is excellent packaging as UIMA analysis engines available via the DKPro Core project.
I would like to add one more note. UIMA and GATE are two frameworks for the creation of Natural Language Processing (NLP) applications. However, Named Entity Recognition (NER) is a basic NLP component, and you can find implementations of NER that are independent of UIMA and GATE. The good news is that you can usually find a wrapper for a decent NER in both UIMA and GATE. To make this clear, consider these examples (a standalone usage sketch follows the list):
OpenNLP NER
A wrapper for OpenNLP NER in GATE
A wrapper for OpenNLP NER in UIMA
It is the same for the Stanford NER component.
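To illustrate the framework-independence point, here is a minimal sketch of running OpenNLP's NER directly, without UIMA or GATE. The en-ner-person.bin file is one of the pre-trained models distributed by the OpenNLP project, and the path used below is an assumption - point it at wherever you downloaded the model.

```java
import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.tokenize.SimpleTokenizer;
import opennlp.tools.util.Span;

import java.io.FileInputStream;
import java.io.InputStream;

public class StandaloneNer {
    public static void main(String[] args) throws Exception {
        // Pre-trained person-name model from the OpenNLP model downloads (path is illustrative).
        try (InputStream modelIn = new FileInputStream("models/en-ner-person.bin")) {
            TokenNameFinderModel model = new TokenNameFinderModel(modelIn);
            NameFinderME finder = new NameFinderME(model);

            String[] tokens = SimpleTokenizer.INSTANCE.tokenize(
                    "John Smith met Mary Jones in London last Tuesday.");
            Span[] spans = finder.find(tokens);

            for (Span span : spans) {
                // Join the tokens covered by each detected entity span.
                StringBuilder name = new StringBuilder();
                for (int i = span.getStart(); i < span.getEnd(); i++) {
                    name.append(tokens[i]).append(' ');
                }
                System.out.println(span.getType() + ": " + name.toString().trim());
            }
            // Reset document-level adaptive data before processing the next document.
            finder.clearAdaptiveData();
        }
    }
}
```

The UIMA and GATE wrappers linked above do essentially the same calls; they just wrap the model loading and the find() step in the respective framework's annotator/processing-resource interface.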
Coming back to your question, this website lists the state-of-the-art NER systems:
http://www.aclweb.org/aclwiki/index.php?title=Named_Entity_Recognition_(State_of_the_art)
For example, in the MUC-7 competition, the best participant, named LTG, got a result with an accuracy of 93.39%.
http://www.aclweb.org/aclwiki/index.php?title=MUC-7_(State_of_the_art)
Note that if you want to use such a state-of-the-art implementation, you may have some issues with its license.

Are there published generative grammars for natural languages?

I have some ideas to do with natural language processing. I will need some grammars of the
S -> NP VP
variety in order to play with them.
If I try to write these rules myself it will be a tedious and error-prone business. Has anyone ever typed up and released comprehensive rule sets for English and other natural languages? Ideally written in BNF, Prolog or similar syntax.
My project only relates to context-free grammars; I'm not interested in statistical methods or machine learning -- I need to systematically produce English-like and Foobarian-like sentences.
If you know where to find such material, I'd very much appreciate it.
You might want to look at Attempto Controlled English and its Prolog-based tools.
Since statistical parsing came in vogue in the early 90s, grammars have usually not been distributed, except for specific problem domains, but derived from distributed corpora such as the Penn Treebank. If you can get a hold of that (I believe a sample is distributed with NLTK), you can "roll your own" grammar by looking at all tree fragments and translating them to rules. (E.g., if you find a node labeled S with children labeled NP and VP, you know there should be a rule S -> NP VP. Pruning the rules that occur infrequently would be a good idea.)
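As a small illustration of that "roll your own" approach, here is a sketch in plain Java (no NLP library; the bracketed tree is a made-up toy example, not real Penn Treebank data) that reads one Treebank-style tree and prints the CFG rules it implies, with counts you could later use for pruning.

```java
import java.util.*;

// Minimal sketch: read a Penn Treebank-style bracketed tree and print the
// CFG rule implied by each node, e.g. "(S (NP ...) (VP ...))" yields "S -> NP VP".
public class RuleExtractor {
    public static void main(String[] args) {
        String tree = "(S (NP (DT the) (NN dog)) (VP (VBZ barks)))";
        Map<String, Integer> ruleCounts = new TreeMap<>();
        extract(tokenize(tree), new int[]{0}, ruleCounts);
        ruleCounts.forEach((rule, count) -> System.out.println(rule + "   [" + count + "]"));
    }

    // Split "(", ")" and symbols into a flat token list.
    private static List<String> tokenize(String s) {
        return Arrays.asList(s.replace("(", " ( ").replace(")", " ) ").trim().split("\\s+"));
    }

    // Recursively parse one bracketed node starting at pos[0]; returns its label
    // and records "label -> children" for every node that has children.
    private static String extract(List<String> toks, int[] pos, Map<String, Integer> rules) {
        pos[0]++;                          // consume "("
        String label = toks.get(pos[0]++); // node label, e.g. S, NP, DT
        List<String> children = new ArrayList<>();
        while (!toks.get(pos[0]).equals(")")) {
            if (toks.get(pos[0]).equals("(")) {
                children.add(extract(toks, pos, rules));   // nonterminal child
            } else {
                children.add(toks.get(pos[0]++));          // terminal (a word)
            }
        }
        pos[0]++;                          // consume ")"
        if (!children.isEmpty()) {
            rules.merge(label + " -> " + String.join(" ", children), 1, Integer::sum);
        }
        return label;
    }
}
```

Run over a whole treebank instead of one toy tree, this produces exactly the kind of S -> NP VP rule inventory the question asks for, and the counts tell you which rules are frequent enough to keep.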
The most comprehensive context-free grammar for English that I know of is the one described in:
Gazdar, Gerald; Ewan H. Klein, Geoffrey K. Pullum, Ivan A. Sag. 1985. Generalized Phrase Structure Grammar. Oxford: Blackwell.
There are also several rule-based but non-context-free grammars freely available online, e.g., the Penn XTAG grammar or the HPSG English Resource Grammar.
Look into the Grammatical Framework. It is a functional programming language for multilingual grammar applications which comes with libraries for ~30 languages, among them English.

Natural language processing [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
The question is maybe (about 100%) subjective, but I need advice. What is the best language for natural language processing? I know Java and C++, but is there an easier way to do it? To be more specific, I need to process text from a lot of sites and extract information.
As I said in the comments, the question is not about a language, but about a suitable library. And there are a lot of NLP libraries in both Java and C++. I believe you should inspect some of them (in both languages) and then, once you know the range of available libraries, create some kind of "big plan" for how to implement your task. So, here I'll just give you some links with a brief explanation of what is what.
Java
GATE - it is exactly what its name means - General Architecture for Text Engineering. An application in GATE is a pipeline. You put language-processing resources like tokenizers, POS taggers, morphological analyzers, etc. on it and run the process. The result is represented as a set of annotations - meta information attached to a piece of text (e.g. a token). In addition to a great number of plugins (including plugins for integration with other NLP resources like WordNet or the Stanford Parser), it has many predefined dictionaries (cities, names, etc.) and its own regex-like language, JAPE. GATE comes with its own IDE (GATE Developer), where you can try out your pipeline setup and then save it and load it from Java code.
UIMA - or Unstructured Information Management Architecture. It is very similar to GATE in terms of architecture. It also builds a pipeline and produces a set of annotations. Like GATE, it has a visual IDE where you can try out your future application. The difference is that UIMA is mostly concerned with information extraction, while GATE performs text processing without explicit consideration of its purpose. Also, UIMA comes with a simple REST server.
OpenNLP - they call themselves an organizational center for open source NLP projects, and this is the most appropriate definition. The main direction of development is using machine-learning algorithms for the most general NLP tasks like part-of-speech tagging, named entity recognition, coreference resolution and so on. It also has good integration with UIMA, so its tools are available there as well.
Stanford NLP - probably the best choice for engineers and researchers with NLP and ML knowledge. Unlike libraries like GATE and UIMA, it doesn't aim to provide as many tools as possible, but instead concentrates on idiomatic models. E.g. you don't have comprehensive dictionaries, but you can train a probabilistic algorithm to create one! In addition to its CoreNLP component, which provides the most widely used tools like tokenization, POS tagging, NER, etc., it has several very interesting subprojects. E.g. their dependency framework allows you to extract the complete sentence structure. That is, you can, for example, easily extract information about the subject and object of a verb in question, which is much harder using other NLP tools (see the sketch below).
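To give a feel for that last point, here is a minimal sketch that pulls subject and object relations out of CoreNLP's enhanced dependency graph. The relation short names are assumptions in the sense that they depend on the CoreNLP version and dependency scheme (older Stanford Dependencies use dobj where newer Universal Dependencies use obj), and the sentence is a toy example.

```java
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.semgraph.SemanticGraphEdge;
import edu.stanford.nlp.util.CoreMap;

import java.util.Properties;

public class SubjectObjectExtractor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,depparse");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation("The committee approved the new budget on Friday.");
        pipeline.annotate(doc);

        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            SemanticGraph deps =
                    sentence.get(SemanticGraphCoreAnnotations.EnhancedDependenciesAnnotation.class);
            for (SemanticGraphEdge edge : deps.edgeIterable()) {
                String rel = edge.getRelation().getShortName();
                // nsubj = subject; dobj/obj = direct object (the name depends on the version).
                if (rel.equals("nsubj") || rel.equals("dobj") || rel.equals("obj")) {
                    System.out.printf("%s(%s, %s)%n",
                            rel,
                            edge.getGovernor().word(),    // usually the verb
                            edge.getDependent().word());  // the subject/object head word
                }
            }
        }
    }
}
```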
C++
UIMA - yes, there are complete implementations for both Java and C++.
Stanford Parser - some of Stanford's projects are Java-only, others are C++-only, and some are available in both languages. You can find many of them here.
APIs
A number of web service APIs perform specific language processing, including:
Alchemy API - language identification, named entity recognition, sentiment analysis and much more! Take a look at their main page - it is quite self-descriptive.
OpenCalais - this service tries to build a giant graph of everything. You pass it a web page URL and it enriches the page's text with the entities it found, together with the relations between them. For example, you pass it a page mentioning "Steve Jobs" and it returns "Apple Inc." (roughly speaking) together with the probability that this is the same Steve Jobs.
Other recommendations
And yes, you should definitely take a look at Python's NLTK. It is not only a powerful and easy-to-use NLP library, but also part of an excellent scientific stack created by an extremely friendly community.
Update (2017-11-15): 7 years later there are even more impressive tools, cool algorithms and interesting tasks. One comprehensive description may be found here:
https://tomassetti.me/guide-natural-language-processing/
Python and NLTK
ScalaNLP, which is a Natural Language Processing library written in Scala, seems suitable for your job.
I would recommend Python and NLTK.
Some hints and notes I can pinpoint based on my experience using it:
Python has efficient list and string handling. You can index lists very efficiently, which matters for natural language work. It also has nice syntactic conveniences; for example, to access the first 100 words of a list, you can slice it as list[:100] (compare that with the STL in C++).
Python serialization is easy and built in. The serialization modules make handling corpora and text a one-line task (compare that with the several lines needed with Boost or other C++ libraries).
NLTK provides classes for loading corpora, processing them, tagging, tokenization, grammar parsing, chunking, and a whole set of machine learning algorithms, among other things. It also provides good resources for probabilistic models based on word distributions in text. http://www.nltk.org/book
If learning a new programming language is an obstacle, you can check out OpenNLP for Java: http://incubator.apache.org/opennlp/
