French lemmatization with CoreNLP - nlp

I'm trying to use Stanford CoreNLP for French texts.
I have two questions:
Is French lemmatization available with CoreNLP?
In some cases the output dependencies do not make sense. For example, for the sentence "Le chat mange la souris" (the cat is eating the mouse), the token "mange" is tagged as an adjective rather than a verb, so it is not treated as the root of the sentence.
But when I use the plural "Les chats mangent la souris" the parse is correct.
Any help would be appreciated!

At this time we do not have a French language lemmatizer.
We will be releasing a new French dependencies model soon with our official 3.7.0 release. I am curious though, how are you generating dependencies, with the "parse" annotator or "depparse" annotator?

Thanks for your response.
I use the following configuration for the parse and depparse annotators:
StanfordCoreNLP pipeline = new StanfordCoreNLP(
    PropertiesUtils.asProperties(
        "annotators", "tokenize, ssplit, pos, depparse, parse",
        "tokenize.language", "fr",
        "pos.model", "edu/stanford/nlp/models/pos-tagger/french/french.tagger",
        "parse.model", "edu/stanford/nlp/models/lexparser/frenchFactored.ser.gz",
        "depparse.model", "edu/stanford/nlp/models/parser/nndep/UD_French.gz"));

Related

What are some of the data preparation steps or techniques one needs to follow when dealing with multi-lingual data?

I'm working on multilingual word embedding code where I need to train my data on English and test it on Spanish. I'll be using the MUSE library by Facebook for the word-embeddings.
I'm looking for a way to pre-process both datasets the same way. I've looked into diacritics restoration to deal with the accents.
I'm having trouble deciding how to carefully remove stopwords and punctuation, and whether or not I should lemmatize.
How can I uniformly pre-process both languages to create a vocabulary list that I can later use with the MUSE library?
Hi Chandana, I hope you're doing well. I would look into using the library spaCy (https://spacy.io/api/doc); its creator has a YouTube video in which he discusses the implementation of NLP in other languages. Below you will find code that will lemmatize and remove stopwords. As far as punctuation goes, you can always set specific characters, such as accent marks, to be ignored.

Personally, I use KNIME, which is free and open source, for preprocessing. You will have to install NLP extensions, but what is nice is that they have different extensions for different languages, which you can install here: https://www.knime.com/knime-text-processing. The Stop Word Filter node (since 2.9) and the Snowball Stemmer node can be applied to Spanish; make sure to select the right language in the dialog of the node. Unfortunately, there is no part-of-speech tagger node for Spanish so far.
# Imports assumed by the functions below
import gensim.utils
import gensim.parsing.preprocessing
from nltk.stem import PorterStemmer, WordNetLemmatizer

# Create functions to lemmatize, stem, and preprocess
# e.g. turn beautiful, beautifully, beautified into the stem beauti
def lemmatize_stemming(text):
    stemmer = PorterStemmer()
    return stemmer.stem(WordNetLemmatizer().lemmatize(text, pos='v'))

# Parse docs into individual words, ignoring tokens shorter than four letters
# and stopwords (him, her, them, for, there, etc.) since "their" is not a topic,
# then append the lemmatized/stemmed tokens to a list.
def preprocess(text):
    result = []
    newStopWords = ['your_stopword1', 'your_stop_word2']
    for token in gensim.utils.simple_preprocess(text):
        if (token not in gensim.parsing.preprocessing.STOPWORDS
                and token not in newStopWords and len(token) > 3):
            result.append(lemmatize_stemming(token))
    return result
I hope this helps; let me know if you have any questions :)

Stanford CoreNLP API fails to parse some sentences

I have been trying to use the Stanford CoreNLP API included in the 2015-12-09 release. I start the server using:
java -mx5g -cp "./*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer
The server works in general, but fails for some sentences, including the following:
"Aside from her specifically regional accent, she reveals by the use of the triad, ``irritable, tense, depressed, a certain pedantic itemization that indicates she has some familiarity with literary or scientific language ( i.e., she must have had at least a high­school education ) , and she is telling a story she has mentally rehearsed some time before."
I end up with a result that starts with :
{"sentences":[{"index":0,"parse":"SENTENCE_SKIPPED_OR_UNPARSABLE","basic-dependencies":
I would greatly appreciate some help in setting this up. Am I not including some annotators in the NLP pipeline?
This same sentence works at http://corenlp.run/
If you're looking for a dependency parse (like that in corenlp.run), you should look at the basic-dependencies field rather than the parse field. If you want a constituency parse, you should include the parse annotator in the list of annotators you are sending to the server. By default, the server does not include the parse annotator, as it's relatively slow.
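As an illustration, here is a minimal sketch of getting both kinds of parse from a local pipeline; the same "annotators" value can be passed as a property when calling the server. The annotator list and example sentence are only assumptions for the demo, not the server's exact defaults:
import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.trees.TreeCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;

Properties props = new Properties();
// Include "parse" so the constituency parse is filled in alongside the dependencies.
props.setProperty("annotators", "tokenize, ssplit, pos, lemma, depparse, parse");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

Annotation doc = new Annotation("She is telling a story she has mentally rehearsed.");
pipeline.annotate(doc);
for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
    // Constituency parse (the "parse" field in the server's JSON output).
    System.out.println(sentence.get(TreeCoreAnnotations.TreeAnnotation.class));
    // Dependency parse (the "basic-dependencies" field).
    System.out.println(sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class));
}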

How to NER and POS tag a pre-tokenized text with Stanford CoreNLP?

I'm using Stanford's CoreNLP Named Entity Recognizer (NER) and Part-of-Speech (POS) tagger in my application. The problem is that my code tokenizes the text beforehand, and then I need to NER- and POS-tag each token. However, I was only able to find out how to do that using the command-line options, not programmatically.
Can someone please tell me how I can programmatically NER- and POS-tag pre-tokenized text using Stanford's CoreNLP?
Edit:
I'm actually using the individual NER and POS distributions, so my code was written as instructed in the tutorials included with Stanford's NER and POS packages. I do have CoreNLP in my classpath, but I am still following the tutorials from the NER and POS packages.
Edit:
I just found that there are instructions on how to set the properties for CoreNLP here: http://nlp.stanford.edu/software/corenlp.shtml, but I wish there were a quick way to do what I want with the Stanford NER and POS taggers so I don't have to recode everything!
If you set the property:
tokenize.whitespace = true
then the CoreNLP pipeline will tokenize on whitespace rather than the default PTB tokenization. You may also want to set:
ssplit.eolonly = true
so that you only split sentences on newline characters.
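For instance, here is a minimal sketch of a pipeline using those two properties on whitespace-joined, one-sentence-per-line input (the annotator list and example text are assumptions for the demo):
import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

Properties props = new Properties();
props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner");
props.setProperty("tokenize.whitespace", "true");  // split tokens on whitespace only
props.setProperty("ssplit.eolonly", "true");       // one sentence per line
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

// The pre-tokenized tokens are passed through exactly as given, joined by spaces.
Annotation doc = new Annotation("John met Amy in Los Angeles");
pipeline.annotate(doc);
for (CoreLabel token : doc.get(CoreAnnotations.TokensAnnotation.class)) {
    System.out.println(token.word() + "\t" + token.tag() + "\t" + token.ner());
}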
To programmatically run a classifier over a list of tokens that you've already gotten via some other means, without a kludge like pasting them together with whitespace and then tokenizing again, you can use the Sentence.toCoreLabelList method:
String[] token_strs = {"John", "met", "Amy", "in", "Los", "Angeles"};
List<CoreLabel> tokens = edu.stanford.nlp.ling.Sentence.toCoreLabelList(token_strs);
for (CoreLabel cl : classifier.classifySentence(tokens)) {
    System.out.println(cl.toShorterString());
}
Output:
[Value=John Text=John Position=0 Answer=PERSON Shape=Xxxx DistSim=463]
[Value=met Text=met Position=1 Answer=O Shape=xxxk DistSim=476]
[Value=Amy Text=Amy Position=2 Answer=PERSON Shape=Xxx DistSim=396]
[Value=in Text=in Position=3 Answer=O Shape=xxk DistSim=510]
[Value=Los Text=Los Position=4 Answer=LOCATION Shape=Xxx DistSim=449]
[Value=Angeles Text=Angeles Position=5 Answer=LOCATION Shape=Xxxxx DistSim=199]
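The snippet above assumes a classifier has already been loaded. A hedged sketch of doing that with one of the serialized models shipped with Stanford NER (the path is an example and depends on where your model files live):
import edu.stanford.nlp.ie.crf.CRFClassifier;
import edu.stanford.nlp.ling.CoreLabel;

// Load a serialized 3-class NER model from the Stanford NER distribution.
CRFClassifier<CoreLabel> classifier =
        CRFClassifier.getClassifier("classifiers/english.all.3class.distsim.crf.ser.gz");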

Arabic lemmatization and Stanford NLP

I am trying to do lemmatization, i.e., identifying the lemma and possibly the Arabic root of a verb, for example:
يتصل ==> lemma (infinitive of the verb) ==> اتصل ==> root (triliteral root / Jidr thoulathi)
==> و ص ل
Do you think Stanford NLP can do that?
Best Regards,
The Stanford Arabic segmenter can't do true lemmatization. However, it is possible to train a new model to do something like stemming:
تكتبون ← ت+ كتب +ون
يتصل ← ي+ تصل
If it is very important that the output is real Arabic lemmas ("تصل" is not a true lemma), you might be better off with a tool like MADAMIRA (http://nlp.ldeo.columbia.edu/madamira/).
Elaboration: The Stanford Arabic segmenter produces its output character-by-character using only these operations (implemented in edu.stanford.nlp.international.arabic.process.IOBUtils):
Split a word between two characters
Transform lil- (للـ) into li+ al- (ل+ الـ)
Transform ta (ت) or ha (ه) into ta marbuta (ة)
Transform ya (ي) or alif (ا) into alif maqsura (ى)
Transform alif maqsura (ى) into ya (ي)
So lemmatizing يتصل to ي+ اتصل would require implementing an extra rule, i.e., to insert an alif after ya or ta. Lemmatization of certain irregular forms would be completely impossible (for example, نساء ← امرأة).
The version of the Stanford segmenter available for download also only breaks off pronouns and particles:
وسيكتشفونه ← و+ س+ يكتشفون +ه
However, if you have access to the LDC Arabic Treebank or a similarly rich source of Arabic text with morphological segmentation annotated, it is possible to train your own model to remove all morphological affixes, which is closer to lemmatization:
وسيكتشفونه ← و+ س+ ي+ كتشف +ون +ه
Note that "كتشف" is not a real Arabic word, but the segmenter should at least consistently produce "كتشف" for تكتشفين ,أكتشف ,يكتشف, etc. If this is acceptable, you would need to change the ATB preprocessing script to instead use the morphological segmentation annotations. You could do this by replacing the script called parse_integrated with a modified version like this: https://gist.github.com/futurulus/38307d98992e7fdeec0d
Then follow the instructions for "TRAINING THE SEGMENTER" in the README.
I am not sure whether the Stanford NLP toolkit has a lemmatizer, but you can try:
the Farasa Lemmatizer, which is the state of the art, or
MADAMIRA for Arabic processing.
The Farasa Lemmatizer outperforms the MADAMIRA lemmatizer in accuracy: at about 97.23%, it gives a +7% relative gain over MADAMIRA on the lemmatization task.
You can read more about the Farasa Lemmatizer at the following link:
https://arxiv.org/pdf/1710.06700.pdf

How to extract subject-verb-object using NLP in Java

How can I extract SVO (subject-verb-object) using NLP in Java? I am new to NLP and currently use OpenNLP, but how do I do this for a particular sentence in Java?
LexicalizedParser lp = new LexicalizedParser("englishPCFG.ser.gz");
String[] sent = { "This", "is", "an", "easy", "sentence", "." };
Tree parse = (Tree) lp.apply(Arrays.asList(sent));
parse.pennPrint();
System.out.println();
TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
tp.print(parse);
I am getting a compilation error at:
new LexicalizedParser("englishPCFG.ser.gz");
The constructor LexicalizedParser(String) is undefined
It seems as if you are using a newer version of the Stanford NLP parser.
In newer versions of the parser, constructors are no longer used to create the parser; instead there are dedicated factory methods. You can use:
LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");
You can use the various overloads of this API; see the Stanford documentation for the overloads of loadModel.
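For completeness, a hedged sketch of the asker's snippet rebuilt around loadModel, following the pattern of the bundled ParserDemo (the model path is the one from the question; treat this as an illustration rather than the exact fix):
import java.util.List;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.ling.Sentence;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreePrint;

// Load the serialized grammar through the factory method instead of a constructor.
LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");
String[] sent = { "This", "is", "an", "easy", "sentence", "." };
List<CoreLabel> rawWords = Sentence.toCoreLabelList(sent);
Tree parse = lp.apply(rawWords);
parse.pennPrint();
// Print the tree again together with collapsed typed dependencies.
TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
tp.printTree(parse);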
This is code from the Stanford dependency parser, not from OpenNLP. Follow the example given in ParserDemo.java (and/or ParserDemo2.java) that's included in the stanford-parser directory and make sure that your demo code and the stanford-parser.jar in your classpath are from the same version of the parser. I suspect you are using a more recent version of the parser with older demo code.
You can use Stanford CoreNLP. Check the answer here for a "rough algorithm" for getting subject-predicate-object from a sentence.
You can use ReVerb. Check the answer here for how to do information extraction from a sentence with ReVerb.
