I am using the Google Cloud PHP Natural Language library, but I am struggling to extract a date from a sentence. The date could appear in various formats, e.g.:
Tomorrow at 9am
Monday 23rd of July
01-01-2018
24-01-2018
01-24-2018 (if you're not British)
I can get this working with Dialogflow, which knows when an entity is a date and/or datetime, but for some reason this library cannot figure out whether something is a date and/or datetime.
Any help would be much appreciated
You can get dates out of text with Dialogflow because it is a Natural Language Understanding (NLU) service, while the Cloud NL API is only a Natural Language Processing (NLP) service.
This means that with the Cloud NL API you can only run certain types of analysis, such as Syntax Analysis, Entity Analysis, Sentiment Analysis, Entity Sentiment Analysis and Content Classification.
You may say: "Wait, then why isn't a date classified as an entity?" The answer is that it could be, but that feature has not yet been released for this product. You may consider requesting it on GCP's Issue Tracker.
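For reference, here is a minimal sketch of an Entity Analysis call so you can inspect for yourself which entity types come back for a date-bearing sentence. It uses the Python client for brevity (the PHP client exposes the same analyzeEntities call); the exact request shape can differ slightly between client-library versions.

# Minimal sketch: run Entity Analysis and print the entity types returned.
# Assumes GOOGLE_APPLICATION_CREDENTIALS points at valid project credentials.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="Monday 23rd of July at 9am I am flying to Paris.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.analyze_entities(request={"document": document})
for entity in response.entities:
    print(entity.name, entity.type_)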
There are some event description texts.
I want to extract the entrance fee of the events.
Sometimes the entrance fee is conditional.
What I want to achieve is to extract the entrance fee and its conditions (if available). It's fine to retrieve the whole phrase or sentence which states the entrance fee plus its conditions.
Note I: The texts are in German language.
Note II: Often the sentences are not complete, as they are mainly event flyers or advertisements.
What would be the category of this problem in NLP? Is it Named Entity Recognition, and could it be solved by training my own model with Apache OpenNLP?
Or, I thought, maybe it would be easier to detect the pattern via the usual keywords in the use case (entrance, $, but, only till, [number]am/pm, ...).
Please shed some light on this.
Input Examples:
- "If you enter the club before 10pm, the entrance is for free. Afterwards it is 6$."
- "Join our party tonight at 11pm till 5am. The entrance fee is 8$. But for girls and students it's half price."
This is broadly a structure learning problem. You might have to combine Named Entity Recognition/Tagging with Coreference Resolution. Read some papers on these as well as related GitHub code and take it from there. Here is a good discussion of state-of-the-art tools for these at the moment: https://www.reddit.com/r/MachineLearning/comments/3dz3fl/dl_architectures_for_entity_recognition_and_other/
Hope that helps.
You might try Stanford's CoreNLP for the named entity extraction part. It should be able to help you pick out the money values, and there is a link to models trained for German as well (https://nlp.stanford.edu/software/CRF-NER.shtml).
Given that it's fine to extract the entire sentence that contains the information, I'd suggest taking a binary sentence classification approach. You could probably get quite far just by using ngrams and some named entity information as features. That would mean you'd want to build a pipeline that automatically segments your documents into sentence-like chunks. You could try a sentence segmentation tool (also provided by Stanford CoreNLP, https://stanfordnlp.github.io/CoreNLP/) as a first go. Since this would form the basis for all further work, you'd want to ensure that the results are at least decent. Perhaps the structure of the document itself gives you enough information to segment it without even using a sentence segmentation tool.
After you have this pipeline in place, you'd want to annotate the sentences extracted from a large set of documents as relevant or non-relevant to make it a binary classification task. Then train a model based on that dataset. Finally, when you apply it to unseen data, first use the sentence segmentation approach, and then classify each sentence.
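If it helps, here is a minimal sketch of that segment-then-classify pipeline, assuming NLTK for sentence segmentation and scikit-learn for the ngram-based classifier; the training sentences and labels are toy placeholders for the annotated data described above.

# Sketch: split a flyer into sentences, then classify each sentence as
# relevant (1) or not relevant (0) to the entrance fee. Toy training data only.
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

nltk.download("punkt", quiet=True)

train_sentences = [
    "The entrance fee is 8$.",
    "Afterwards it is 6$.",
    "Join our party tonight at 11pm till 5am.",
    "Doors open at 9pm.",
]
train_labels = [1, 1, 0, 0]

# Word 1-2 grams as features; for German flyers, German stop words could be added here.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(train_sentences, train_labels)

flyer = ("If you enter the club before 10pm, the entrance is for free. "
         "Afterwards it is 6$.")
for sentence in nltk.sent_tokenize(flyer):
    print(model.predict([sentence])[0], sentence)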
Is there an effective Natural Language Processor that can fetch a phone number and date of birth from human speech? Each user has a different way of specifying their phone number and date of birth. Hence, simply converting speech to text and then parsing the text for a phone number is not helpful.
You can use the Google Speech-to-Text API. I used the same approach for letting blind people enter account numbers. I was working for a bank, so there were lots of numbers involved as input, e.g. account numbers, card numbers, etc.
With the Google STT engine you can define custom voice inputs.
I also created a feedback mechanism using the Text-to-Speech API so that the app can tell the user when their input is invalid and ask them to speak again.
You can see a code snippet on GitHub:
https://github.com/hiteshsahu/Android-TTS-STT
The easiest way is to extract text from speech; there are plenty of tools, both proprietary (Nuance) and tinker-friendly open source ones like Sphinx, and plenty of tools to extract dates and phone numbers expressed in different ways. IBM Watson offers one, Smart Formatting (beta), to normalize dates and phone numbers in its own transcripts. To guess which dates are birthdays, try detecting related keywords (birth, born, and so on) nearby.
For a few free alternatives, check the links below; a short phone-number sketch follows them.
For phone numbers:
https://www.npmjs.com/package/phone-number-extractor
https://github.com/googlei18n/libphonenumber
For date extraction, check these previous questions:
Extracting dates from text in Java
Best way to identify and extract dates from text Python?
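For the phone-number part, here is a small sketch with the Python port of libphonenumber (the phonenumbers package) applied to an STT transcript; the transcript string is just an example.

# Sketch: pull phone numbers out of a speech-to-text transcript with the
# Python port of libphonenumber (pip install phonenumbers).
import phonenumbers

transcript = "You can reach me at 650 253 0000 and I was born on July 4th 1990."
for match in phonenumbers.PhoneNumberMatcher(transcript, "US"):
    print(phonenumbers.format_number(match.number,
                                     phonenumbers.PhoneNumberFormat.E164))
# +16502530000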
There is a patent for the process you are asking about, but you might have to pay royalties or something:
http://www.freepatentsonline.com/8416928.html
If you want to fetch the phone number and date of birth from human speech, you can use another option and implement it yourself.
https://cloud.google.com/speech/
This API is really useful for converting speech to text. I had this problem at one point, so you can try it too.
Another API, which is really good for authentication, is:
https://api.ai/
I hope this helps you.
I've been working on a small, personal project which takes a user's job skills and suggests the most ideal career for them based on those skills. I use a database of job listings to achieve this. At the moment, the code works as follows:
1) Process the text of each job listing to extract skills that are mentioned in the listing
2) For each career (e.g. "Data Analyst"), combine the processed text of the job listings for that career into one document
3) Calculate the TF-IDF of each skill within the career documents
After this, I'm not sure which method I should use to rank careers based on a list of a user's skills. The most popular method that I've seen would be to treat the user's skills as a document as well, then to calculate the TF-IDF for the skill document, and use something like cosine similarity to calculate the similarity between the skill document and each career document.
This doesn't seem like the ideal solution to me, since cosine similarity is best used when comparing two documents of the same format. For that matter, TF-IDF doesn't seem like the appropriate metric to apply to the user's skill list at all. For instance, if a user adds additional skills to their list, the TF for each skill will drop. In reality, I don't care what the frequency of the skills are in the user's skills list -- I just care that they have those skills (and maybe how well they know those skills).
It seems like a better metric would be to do the following:
1) For each skill that the user has, calculate the TF-IDF of that skill in the career documents
2) For each career, sum the TF-IDF results for all of the user's skills
3) Rank careers based on the above sum
Am I thinking along the right lines here? If so, are there any algorithms that work along these lines, but are more sophisticated than a simple sum? Thanks for the help!
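For concreteness, here is a minimal sketch of the summing approach described in the question, using scikit-learn's TfidfVectorizer and toy career documents in place of the real job-listing corpus:

# Sketch: rank careers by summing the TF-IDF weight of each user skill
# in the per-career document (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer

career_docs = {
    "Data Analyst": "sql python statistics excel visualization sql",
    "Web Developer": "javascript html css python git",
}
user_skills = ["python", "sql"]

vectorizer = TfidfVectorizer()
weights = vectorizer.fit_transform(career_docs.values()).toarray()
vocab = vectorizer.vocabulary_

scores = {
    career: sum(row[vocab[s]] for s in user_skills if s in vocab)
    for career, row in zip(career_docs, weights)
}
for career, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(career, round(score, 3))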
The second approach you explained will work. But there are better ways to solve this kind of problem.
First, you should learn a little bit about language models and move away from the vector space model.
Second, since your problem is similar to expert finding/profiling, you should study a baseline language modeling framework to implement a solution.
You can implement "A language modeling framework for expert finding" with a few changes so that the formulas are adapted to your problem.
Also, reading "On the assessment of expertise profiles" will give you a better understanding of expert profiling with the framework above.
You can find some good ideas, resources and projects on expert finding/profiling on Balog's blog.
I would take the SSRM [1] approach to expand the query (job documents) using WordNet (extracted database [2]) as a semantic lexicon, so you are not constrained to direct word-vs-word matches. SSRM has its own similarity measure (I believe the paper is open access; if not, check this: http://blog.veles.rs/document-similarity-computation-models-literature-review/, where many similarity computation models are listed). Alternatively, and if your corpus is big enough, you might try LSA/LSI [3,4] (also covered on that page) without using an external lexicon. But if your data is in English, WordNet's semantic graph is really rich in all directions (hyponyms, synonyms, hypernyms... concepts/SynSets).
The bottom line: I would avoid simple SVM/TF-IDF for such a concrete domain. I measured a really serious margin for SSRM over TF-IDF/VSM (measured as macro-averaged F1, 5-class single-label classification, narrow domain).
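To get a feel for the WordNet signal SSRM builds on, here is a tiny sketch with NLTK's WordNet interface (English only); the word pair is arbitrary.

# Sketch: WordNet-based similarity between two words, the kind of
# semantic-lexicon signal SSRM exploits instead of exact word matches.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

skill = wn.synsets("skill")[0]
ability = wn.synsets("ability")[0]
print(skill.path_similarity(ability))   # value in (0, 1], higher means closer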
[1] A. Hliaoutakis, G. Varelas, E. Voutsakis, E.G.M. Petrakis, E. Milios, Information Retrieval by Semantic Similarity, Int. J. Semant. Web Inf. Syst. 2 (2006) 55–73. doi:10.4018/jswis.2006070104.
[2] J.E. Petralba, An extracted database content from WordNet for Natural Language Processing and Word Games, in: 2014 Int. Conf. Asian Lang. Process., 2014: pp. 199–202. doi:10.1109/IALP.2014.6973502.
[3] P.W. Foltz, Latent semantic analysis for text-based research, Behav. Res. Methods, Instruments, Comput. 28 (1996) 197–202. doi:10.3758/BF03204765.
[4] A. Kashyap, L. Han, R. Yus, J. Sleeman, T. Satyapanich, S. Gandhi, T. Finin, Robust semantic text similarity using LSA, machine learning, and linguistic resources, Springer Netherlands, 2016. doi:10.1007/s10579-015-9319-2.
I'm embarking on a project for a non-profit organization to help process and classify thousands of reports annually from their field workers / contractors the world over. I'm relatively new to NLP and as such wanted to seek the group's guidance on the best approach to solve our problem.
I'll highlight the current process, and our challenges and would love your help on the best way to solve our problem.
Current process: Field officers submit reports from locally run projects in the form of best practices. These reports are then processed by a full-time team of curators who (i) ensure they adhere to a best-practice template and (ii) edit the documents to improve language/style/grammar.
Challenge: As the number of field workers has increased, the volume of reports being generated has grown, and our editors are now becoming the bottleneck.
Solution: We would like to automate the first step of our process, i.e., checking each document for compliance with the organizational best-practice template.
Basically, we need to ensure every report has 3 components namely:
1. States its purpose: What topic / problem does this best practice address?
2. Identifies Audience: Who is this for?
3. Highlights Relevance: What can the reader do after reading it?
Here's an example of a good report submission.
"This document introduces techniques for successfully applying best practices across developing countries. This study is intended to help low-income farmers identify a set of best practices for pricing agricultural products in places where there is no price transparency. By implementing these processes, farmers will be able to get better prices for their produce and raise their household incomes."
As of now, our approach has been to use RegEx and check for keywords, i.e., to check for compliance we use the following logic (sketched in code after the list):
1. To check "states purpose" = we do a regex match on 'purpose', 'intent'
2. To check "identifies audience" = we do a regex match on 'identifies', 'is for'
3. To check "highlights relevance" = we do a regex match on 'able to', 'allows', 'enables'
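For concreteness, the current check looks roughly like this (the keyword lists are the ones from points 1-3; the report text is a placeholder):

# Sketch of the current RegEx/keyword compliance check (points 1-3 above).
import re

CHECKS = {
    "states purpose": r"\b(purpose|intent)",
    "identifies audience": r"\b(identifies|is for)\b",
    "highlights relevance": r"\b(able to|allows|enables)\b",
}

def check_report(text):
    text = text.lower()
    return {name: bool(re.search(pattern, text)) for name, pattern in CHECKS.items()}

report_text = "This study is intended to help low-income farmers."  # placeholder
print(check_report(report_text))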
The current approach of RegEx seems very primitive and limited, so I wanted to ask the community if there is a better way to solve this problem using something like NLTK or CoreNLP.
Thanks in advance.
Interesting problem; I believe it's a genuine research problem! In natural language processing, there are a few techniques that learn and extract templates from text and can then use them as gold annotations to identify whether a document follows the template structure. Researchers have used this kind of system for automatic question answering (extract templates from questions and then answer them). But your case is more difficult, as you need to learn the structure from a report. From an NLP point of view your problem is hard to address (no simple NLP task matches your problem definition), yet you may not need any fancy (complex) model to solve it.
You can start with simple document matching and computing a similarity score. If you have a large collection of positive examples (well-formatted and compliant reports), you can construct a dictionary based on tf-idf weights. Then you can check for the presence of the dictionary tokens. You can also treat this problem as a binary classification problem. There are good machine learning classifiers, such as SVM and logistic regression, that work well for text data. You can use Python and scikit-learn to build programs quickly, and they are pretty easy to use. For text pre-processing, you can use NLTK.
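Here is a small sketch of the tf-idf dictionary idea, assuming you have a handful of known-good reports to build the dictionary from (the two strings below are placeholders):

# Sketch: build a keyword dictionary from well-formatted reports using tf-idf
# weights, then score a new report by how many dictionary tokens it contains.
from sklearn.feature_extraction.text import TfidfVectorizer

good_reports = [
    "This document introduces techniques intended to help low-income farmers.",
    "This study explains how farmers will be able to get better prices.",
]  # placeholders for real, compliant reports

vectorizer = TfidfVectorizer(stop_words="english")
weights = vectorizer.fit_transform(good_reports).mean(axis=0).A1
terms = vectorizer.get_feature_names_out()
dictionary = {t for t, w in sorted(zip(terms, weights), key=lambda x: -x[1])[:200]}

def coverage(report):
    tokens = set(report.lower().split())
    return len(tokens & dictionary) / len(dictionary)

print(coverage("This report helps farmers price their produce."))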
Since the reports will be generated by field workers and there are only a few questions the reports need to answer (you mentioned 3 specific components), I guess simple keyword matching techniques will be a good start for your research. You can gradually move in different directions based on your observations.
This seems like a perfect scenario to apply some machine learning to your process.
First of all, the data annotation problem is covered. This is usually the most annoying problem. Thankfully, you can rely on the curators. The curators can mark the specific sentences that specify: audience, relevance, purpose.
Train some models to identify these types of clauses. If all the classifiers fire for a certain document, it means that the document is properly formatted.
If errors are encountered, make sure to retrain the models with the specific examples.
If you don't give yourself hints about the format of the document, this is an open problem.
What you can do, though, is ask the people writing reports to conform to some format for the document, like having 3 parts, each with a pre-defined title, like so:
1. Purpose
Explains the purpose of the document in several paragraphs.
2. Topic / Problem
This addresses the foobar problem, also known as lorem ipsum filler text.
3. Take away
What can the reader do after reading it?
You parse this document from .doc format, for instance, and extract the three parts. Then you can run spell checking, grammar checking and text-complexity algorithms. Finally you can extract, for instance, Named Entities (cf. Named Entity Recognition) and low TF-IDF words.
I've been trying to do something very similar with clinical trials, where most of the data is again written in natural language.
If you do not care about past data, and have control over what the field officers write, maybe you can have them provide these 3 extra fields in their reports, and you would be done.
Otherwise: CoreNLP and OpenNLP, the libraries that I'm most familiar with, have some tools that can help you with part of the task. For example, if your regex pattern matches a word that starts with the prefix "inten", the actual word could be "intention", "intended", "intent", "intentionally", etc., and you wouldn't necessarily know if the word is a verb, a noun, an adjective or an adverb. The POS taggers and parsers in these libraries can tell you the part of speech (POS) of the word, and maybe you only care about the verbs that start with "inten", or more strictly, verbs in the third-person singular.
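As a rough illustration of the POS idea (using NLTK's tagger here rather than CoreNLP/OpenNLP, purely for brevity):

# Sketch: keep only "inten"-prefixed words that the POS tagger labels as verbs.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "This study is intended to help low-income farmers."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
inten_verbs = [word for word, tag in tagged
               if word.lower().startswith("inten") and tag.startswith("VB")]
print(inten_verbs)  # e.g. ['intended']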
CoreNLP has another tool called OpenIE, which attempts to extract relations in a sentence. For example, given the following sentence
Born in a small town, she took the midnight train going anywhere
CoreNLP can extract the triple
she, took, midnight train
Combined with the POS tagger, for example, you would also know that "she" is a personal pronoun and "took" is a past-tense verb.
These libraries can accomplish many other tasks such as tokenization, sentence splitting, and named entity recognition and it would be up to you to combine all of these tools with your domain knowledge and creativity to come up with a solution that works for your case.
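If you go the CoreNLP route, a minimal sketch of asking a locally running CoreNLP server for OpenIE triples might look like this (it assumes the server has already been started and is listening on localhost:9000):

# Sketch: request OpenIE triples from a local Stanford CoreNLP server.
import json
import requests

text = "Born in a small town, she took the midnight train going anywhere."
props = {"annotators": "tokenize,ssplit,pos,lemma,depparse,natlog,openie",
         "outputFormat": "json"}
resp = requests.post("http://localhost:9000/",
                     params={"properties": json.dumps(props)},
                     data=text.encode("utf-8"))
for sentence in resp.json()["sentences"]:
    for triple in sentence["openie"]:
        print(triple["subject"], "|", triple["relation"], "|", triple["object"])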
My current understanding is that it's possible to extract entities from a text document using toolkits such as OpenNLP, Stanford NLP.
However, is there a way to find relationships between these entities?
For example consider the following text :
"As some of you may know, I spent last week at CERN, the European high-energy physics laboratory where the famous Higgs boson was discovered last July. Every time I go to CERN I feel a deep sense of reverence. Apart from quick visits over the years, I was there for three months in the late 1990s as a visiting scientist, doing work on early Universe physics, trying to figure out how to connect the Universe we see today with what may have happened in its infancy."
Entities: I (author), CERN, Higgs boson
Relationships :
- I "visited" CERN
- CERN "discovered" Higgs boson
Thanks.
Yes absolutely. This is called Relation Extraction. Stanford has developed several useful tools for working on this problem.
Here is their website: http://deepdive.stanford.edu/relation_extraction
Here is the github repository: https://github.com/philipperemy/Stanford-OpenIE-Python
In general, here is how the process works:
results = extract_entity_relations("Barack Obama was born in Hawaii.")
print(results)
# [['Barack Obama','was born in', 'Hawaii']]
It is of some importance that only triples of the form (subject, predicate, object) are extracted.
You can extract verbs with their dependants using Stanford Parser, for example. E.g., you might get "dependency chains" like
"I :: spent :: at :: CERN".
It is a much tougher task to recognise that "I spent at CERN", "I visited CERN" and "CERN hosted my visit" (etc.) denote the same kind of event. Going into how this can be done is beyond the scope of an SO question, but you can read up on the literature on paraphrase recognition (here is one overview paper). There is also a related question on SO.
Once you can cluster similar chains, you'd need to find a way to label them. You could simply choose the verb of the most common chain in a cluster.
If, however, you have a pre-defined set of relation types you want to extract and lots of texts manually annotated for these relations, then the approach could be very different, e.g., using machine learning to learn how to recognize a relation type based on annotated data.
I don't know if you're still interested, but CoreNLP added a new annotator called OpenIE (Open Information Extraction), which should accomplish what you're looking for. Check it out: OpenIE
Similar to the Stanford parser, you can also use the Google Language API, where you send a string and get a dependency tree response.
You can test this API first to see if it works well with your corpus: https://cloud.google.com/natural-language/
The outcome here is a subject-predicate-object (SPO) triple, where the predicate describes the relationship. You'll need to traverse the dependency graph and write a script to parse out the triple.
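A rough sketch of that traversal with the Python client follows (field and enum names may differ slightly between client-library versions, and a real script would also need to handle compound nouns and modifiers):

# Sketch: pull a bare (subject, predicate, object) triple out of the Cloud
# Natural Language dependency parse.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="CERN discovered the Higgs boson last July.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
tokens = client.analyze_syntax(request={"document": document}).tokens

Label = language_v1.DependencyEdge.Label
for token in tokens:
    if token.dependency_edge.label == Label.NSUBJ:
        head = token.dependency_edge.head_token_index  # index of the governing verb
        obj = next((t.text.content for t in tokens
                    if t.dependency_edge.head_token_index == head
                    and t.dependency_edge.label == Label.DOBJ), None)
        print(token.text.content, tokens[head].text.content, obj)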
There are many ways to do relation extraction. As colleagues mentioned, you have to know about NER and coreference resolution. Different techniques require different approaches. Nowadays, distant supervision is most common; for detecting the relation between entities, Freebase is often used as the source of training data.