Extracting age-related information from raw text using NLP

I am new to NLP and I have been trying to extract age-related information from raw text. I googled but didn't find a reliable library in any language for this requirement. It would be great if I could get some help with this. Language is not a constraint: Java, Python, or anything else works. Any help would be much appreciated. Thanks in advance. Cheers!
Update:
I tried adding the annotators mentioned in the Stanford documentation to my Java parser, and I am facing the exception below:
ERROR: cannot create CorefAnnotator!
java.lang.RuntimeException: Error creating coreference system
at edu.stanford.nlp.scoref.StatisticalCorefSystem.fromProps(StatisticalCorefSystem.java:58)
at edu.stanford.nlp.pipeline.CorefAnnotator.<init>(CorefAnnotator.java:66)
at edu.stanford.nlp.pipeline.AnnotatorImplementations.coref(AnnotatorImplementations.java:220)
at edu.stanford.nlp.pipeline.AnnotatorFactories$13.create(AnnotatorFactories.java:515)
at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:85)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:375)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:139)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:135)
at com.dateparser.SUtime.SUAgeParser.makeNumericPipeline(SUAgeParser.java:85)
at com.dateparser.SUtime.SUAgeParser.<clinit>(SUAgeParser.java:60)
Caused by: java.lang.RuntimeException: Error initializing coref system
at edu.stanford.nlp.scoref.StatisticalCorefSystem.<init>(StatisticalCorefSystem.java:36)
at edu.stanford.nlp.scoref.ClusteringCorefSystem.<init>(ClusteringCorefSystem.java:24)
at edu.stanford.nlp.scoref.StatisticalCorefSystem.fromProps(StatisticalCorefSystem.java:48)
... 9 more
Caused by: java.io.IOException: Unable to open "edu/stanford/nlp/models/hcoref/md-model.ser" as class path, filename or URL
at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:485)
at edu.stanford.nlp.io.IOUtils.readObjectFromURLOrClasspathOrFileSystem(IOUtils.java:323)
at edu.stanford.nlp.hcoref.md.DependencyCorefMentionFinder.<init>(DependencyCorefMentionFinder.java:38)
at edu.stanford.nlp.hcoref.CorefDocMaker.getMentionFinder(CorefDocMaker.java:149)
at edu.stanford.nlp.hcoref.CorefDocMaker.<init>(CorefDocMaker.java:61)
at edu.stanford.nlp.scoref.StatisticalCorefSystem.<init>(StatisticalCorefSystem.java:34)
... 11 more
I upgraded to version 1.6.0 and also added stanford-corenlp-models-current.jar to the classpath. Please let me know if I am missing something.
Update 1:
The exception was fixed after upgrading to 3.9.1, but now I am getting the output as a per:duration relation instead of per:age. Here is my code:
// A shared pipeline instance; parse() below relies on this field existing.
private static final AnnotationPipeline pipeline = makePipeline();

private static AnnotationPipeline makePipeline() {
    Properties props = new Properties();
    props.setProperty("annotators",
            "tokenize,ssplit,pos,lemma,ner,depparse,coref,kbp");
    return new StanfordCoreNLP(props);
}

public static void parse(String str) {
    try {
        Annotation doc = new Annotation(str);
        pipeline.annotate(doc);
        List<CoreMap> mentionsAnnotations = doc.get(MentionsAnnotation.class);
        for (CoreMap currentCoreMap : mentionsAnnotations) {
            System.out.println(currentCoreMap.get(TextAnnotation.class));
            System.out.println(currentCoreMap.get(CharacterOffsetBeginAnnotation.class));
            System.out.println(currentCoreMap.get(CharacterOffsetEndAnnotation.class));
            System.out.println(currentCoreMap.get(NamedEntityTagAnnotation.class));
        }
    } catch (Exception e) {
        e.printStackTrace(); // don't swallow exceptions silently
    }
}
Is this normal behaviour or am I doing something wrong?

You may find the KBP relation extractor useful.
Example text:
Joe Smith is 58 years old.
Command:
java -Xmx12g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,ner,depparse,coref,kbp -file example.txt -outputFormat text
This should attach Joe Smith to 58 years old with the per:age relation.
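If you want the relation triples programmatically rather than from the command line, here is a minimal sketch that reads the KBPTriplesAnnotation the kbp annotator attaches to each sentence (class names are from CoreNLP 3.9; adjust to your version):

import java.util.Properties;

import edu.stanford.nlp.ie.util.RelationTriple;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class KbpAgeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,depparse,coref,kbp");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation("Joe Smith is 58 years old.");
        pipeline.annotate(doc);

        // Each sentence carries the KBP triples extracted from it
        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            for (RelationTriple triple : sentence.get(CoreAnnotations.KBPTriplesAnnotation.class)) {
                // Expected output includes: Joe Smith | per:age | 58 years old
                System.out.println(triple.subjectGloss() + " | "
                        + triple.relationGloss() + " | " + triple.objectGloss());
            }
        }
    }
}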

Related

Preferences library is causing E/libc & E/Pref errors

I'm building a weather app using MVVM and Retrofit, and I recently added a PreferenceFragmentCompat subclass to implement some user settings using the preferences library. After doing so, my app won't run and I keep getting these few lines of errors:
2020-04-08 00:54:12.346 18079-18079/? E/de.flogaweathe: Unknown bits set in runtime_flags: 0x8000
2020-04-08 00:54:12.410 18079-18079/com.nesoinode.flogaweather E/libc: Access denied finding property "ro.vendor.df.effect.conflict"
2020-04-08 00:54:12.421 18079-18110/com.nesoinode.flogaweather E/Perf: Fail to get file list com.nesoinode.flogaweather
2020-04-08 00:54:12.421 18079-18110/com.nesoinode.flogaweather E/Perf: getFolderSize() : Exception_1 = java.lang.NullPointerException: Attempt to get length of null array
2020-04-08 00:54:12.421 18079-18110/com.nesoinode.flogaweather E/Perf: Fail to get file list oat
2020-04-08 00:54:12.422 18079-18110/com.nesoinode.flogaweather E/Perf: getFolderSize() : Exception_1 = java.lang.NullPointerException: Attempt to get length of null array
I've got no idea what these are, and I can't find any specific answers on Stack Overflow or Google. There are no indications of what is causing the error, so I can't figure out if I'm doing something wrong or if it is a library issue. Any ideas?
Here's the SettingsFragment where I'm adding the preferences from an XML resource file:
class SettingsFragment : PreferenceFragmentCompat() {
    override fun onCreatePreferences(savedInstanceState: Bundle?, rootKey: String?) {
        addPreferencesFromResource(R.xml.settings_prefs)
    }
}
And here's how I'm reading some values from the shared preferences:
class UnitProviderImpl(context: Context) : UnitProvider {
    private val appContext = context.applicationContext

    private val preferences: SharedPreferences
        get() = PreferenceManager.getDefaultSharedPreferences(appContext)

    override fun getUnitSystem(): String {
        val selectedUnitSystemName = preferences.getString(UNIT_SYSTEM_KEY,
            UnitSystem.SI.name.toLowerCase(Locale.ROOT))
        return selectedUnitSystemName!!
    }
}
I managed to figure out a solution after doing some more research. First, I commented out all the code related to the preferences library (and the library itself) and ran the app again. The run was successful, without any errors, which narrowed the problem down to the androidx.preference:preference-ktx:1.1.0 library itself, since my code had been reviewed and no issues were found with it. Looking through the preference docs, I figured I could try a beta or alpha version that might have fixed this issue. And lo and behold, after switching to androidx.preference:preference-ktx:1.1.0-beta01 and uncommenting the related code, everything worked once again.
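For reference, the only change needed was the dependency version in the app-level build.gradle (a sketch, assuming the standard implementation configuration):

// app/build.gradle
dependencies {
    // was: implementation "androidx.preference:preference-ktx:1.1.0"
    implementation "androidx.preference:preference-ktx:1.1.0-beta01"
}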

Missing class: com.fasterxml.jackson.core.type.TypeReference. R8:Warning

After updating Android Studio to 3.5, I have been facing the warning below whenever I build my project.
Missing class: com.fasterxml.jackson.core.type.TypeReference
My project is using AndroidX. Here is the gist for my build.gradle (app):
https://gist.github.com/Arkar009/4ae5a05ff3435636bc605fee1fbdb050. Does anyone know why this error occurs, or have alternative ideas for solving it? Thanks in advance.
If you're super sure you will remember this line if you include Jackson later in your project, this does the trick (add it to your project's proguard-project.[txt|pro] file):
-dontwarn com.fasterxml.jackson.core.type.TypeReference
That class gets included somehow in the missing-classes Set in R8 (I didn't dig that far into R8's code), but the warning is skipped when the class matches one of the patterns in the "Don't Warn" rules (see com/android/tools/r8/R8.java):
List<ProguardConfigurationRule> synthesizedProguardRules = new ArrayList<>();
timing.begin("Strip unused code");
Set<DexType> classesToRetainInnerClassAttributeFor = null;
try {
    Set<DexType> missingClasses = appView.appInfo().getMissingClasses();
    missingClasses = filterMissingClasses(
        missingClasses, options.getProguardConfiguration().getDontWarnPatterns());
    if (!missingClasses.isEmpty()) {
        missingClasses.forEach(
            clazz -> {
                options.reporter.warning(
                    new StringDiagnostic("Missing class: " + clazz.toSourceString()));
            });
        // ... (excerpt truncated)
TBH, I was also bugged enough by this warning to dig into R8. Hope it helps!

Spark file stream syntax

JavaPairInputDStream<Text, BytesWritable> dStream = jsc.fileStream("/home/suv/junk/sparkInput");
When I run this code, I am getting:
java.lang.ClassCastException: java.lang.Object cannot be cast to org.apache.hadoop.mapreduce.InputFormat
I am unable to specify the input format in the file stream. How do I provide the input format?
This is the method signature I got:
public <K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>> JavaPairInputDStream<K,V> fileStream(String directory)
How do I specify the input format with this?
After wasting my whole day on this, I wrote a utility in Scala:
class ZipFileStream {
  def fileStream(path: String, ssc: StreamingContext): JavaPairInputDStream[Text, BytesWritable] = {
    return ssc.fileStream[Text, BytesWritable, ZipFileInputFormat](path)
  }
}
and referred to it from Java. Any better solution is appreciated.
I have faced the same issue. It seems to be a bug, which was fixed in Spark 1.3.0:
https://issues.apache.org/jira/browse/SPARK-5297
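If you are on 1.3.0 or later, the fix adds overloads that take the key, value, and input-format classes explicitly, so the Scala shim is no longer needed. A rough sketch (TextInputFormat is just a stand-in for your input format, and jsc is your JavaStreamingContext):

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;

// Spark 1.3.0+: pass the classes so the InputFormat type is no longer erased
JavaPairInputDStream<LongWritable, Text> dStream = jsc.fileStream(
        "/home/suv/junk/sparkInput",
        LongWritable.class, Text.class, TextInputFormat.class);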

JDT SearchEngine throws a NullPointerException

I'm trying to use the JDT SearchEngine to find references to a given object, but I'm getting a NullPointerException when invoking the "search" method of org.eclipse.jdt.core.search.SearchEngine.
Following is the error trace:
java.lang.NullPointerException
at org.eclipse.jdt.internal.core.search.BasicSearchEngine.findMatches(BasicSearchEngine.java:214)
at org.eclipse.jdt.internal.core.search.BasicSearchEngine.search(BasicSearchEngine.java:515)
at org.eclipse.jdt.core.search.SearchEngine.search(SearchEngine.java:582)
And following is the method I'm using to perform search:
private static void search(String elementName) { // elementName -> a method name
    try {
        SearchPattern pattern = SearchPattern.createPattern(elementName, IJavaSearchConstants.METHOD,
                IJavaSearchConstants.REFERENCES, SearchPattern.R_PATTERN_MATCH);
        IJavaSearchScope scope = SearchEngine.createWorkspaceScope();
        SearchRequestor requestor = new SearchRequestor() {
            @Override
            public void acceptSearchMatch(SearchMatch match) {
                System.out.println("Element - " + match.getElement());
            }
        };
        SearchEngine searchEngine = new SearchEngine();
        SearchParticipant[] searchParticipants = new SearchParticipant[] {
                SearchEngine.getDefaultSearchParticipant() };
        searchEngine.search(pattern, searchParticipants, scope, requestor, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Refer to the "Variables" window in the following snapshot to check the values of the arguments passed to searchEngine.search():
I think the issue is because of the value of "scope" [highlighted in black above], which means SearchEngine.createWorkspaceScope() doesn't return the expected values in this case.
NOTE: This is part of a program that runs as a standalone Java program (not an Eclipse plugin) and uses the JDT APIs to parse given source code (using JDT-AST).
Is it possible to use the JDT SearchEngine in such a case (a non-Eclipse-plugin program), or is this issue due to some other reason?
I'd really appreciate an answer on this.
No. You cannot use the search engine without opening a workspace. The reason is that the SearchEngine relies on the Eclipse filesystem abstraction (IResource, IFile, IFolder, etc.), which is only available when the workspace is open.
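Since you are already parsing with JDT-AST standalone, one workaround is to skip the SearchEngine entirely and collect matching invocations from the AST yourself. A rough sketch under that assumption (the classpath and source-folder arrays, and the unit name, are placeholders you would fill in):

import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTParser;
import org.eclipse.jdt.core.dom.ASTVisitor;
import org.eclipse.jdt.core.dom.CompilationUnit;
import org.eclipse.jdt.core.dom.MethodInvocation;

private static void findReferences(char[] source, final String methodName,
        String[] classpath, String[] sourceDirs) {
    ASTParser parser = ASTParser.newParser(AST.JLS8);
    parser.setKind(ASTParser.K_COMPILATION_UNIT);
    parser.setResolveBindings(true);
    // Standalone (non-plugin) setup: supply classpath and source folders yourself
    parser.setEnvironment(classpath, sourceDirs, null, true);
    parser.setUnitName("Example.java"); // required for binding resolution outside a workspace
    parser.setSource(source);
    final CompilationUnit unit = (CompilationUnit) parser.createAST(null);
    unit.accept(new ASTVisitor() {
        @Override
        public boolean visit(MethodInvocation node) {
            if (node.getName().getIdentifier().equals(methodName)) {
                System.out.println("Reference at line "
                        + unit.getLineNumber(node.getStartPosition()));
            }
            return true;
        }
    });
}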

Xtext/EMF how to do model-to-model transform?

I have a DSL in Xtext, and I would like to reuse the rules, terminals, etc. defined in my .xtext file to generate a configuration file for some other tool involved in the project. The config file uses syntax similar to BNF, so it is very similar to the actual Xtext content and it requires minimal transformations. In theory I could easily write a script that would parse Xtext and spit out my config...
The question is, how do I go about implementing it so that it fits with the whole ecosystem? In other words - how to do a Model to Model transform in Xtext/EMF?
If you have both metamodels (Ecore, XSD, ...), your best shot is to use ATL (http://www.eclipse.org/atl/).
If I understand you correctly, you want to go from an Xtext model to its EMF model. Here is a code example that achieves this; substitute your model specifics where necessary.
public static BeachScript loadScript(String file) throws BeachScriptLoaderException {
    try {
        Injector injector = new BeachStandaloneSetup().createInjectorAndDoEMFRegistration();
        XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
        resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);
        Resource resource = resourceSet.createResource(URI.createURI("test.beach"));
        InputStream in = new ByteArrayInputStream(file.getBytes());
        resource.load(in, resourceSet.getLoadOptions());
        BeachScript model = (BeachScript) resource.getContents().get(0);
        return model;
    } catch (Exception e) {
        throw new BeachScriptLoaderException("Exception Loading Beach Script " + e.toString(), e);
    }
}
