How to pass application-specific phrases to the google-nlp API? - google-cloud-nl

I'm using Google NLP for executing voice commands in our application.
Scenario:
Input text: "Generate Customer Profitability Report"
Since 'Customer Profitability' is a single entity from the application's perspective, is there a way to pass a set of suggested phrases [in this case 'Customer Profitability'] to the NLP API so that it treats them as a single phrase in its response?
This is possible in the Speech API, where I can pass suggested phrases.
Any pointers along these lines are much appreciated!
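For reference, this is roughly how I pass suggested phrases to the Speech API today; a minimal sketch with the google-cloud-speech client, where the audio file, encoding and sample rate are placeholders:
# pip install google-cloud-speech
from google.cloud import speech

client = speech.SpeechClient()

# Phrase hints bias recognition towards application-specific terms.
speech_context = speech.SpeechContext(
    phrases=["Customer Profitability", "Generate Customer Profitability Report"]
)

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # placeholder: raw 16-bit PCM
    sample_rate_hertz=16000,                                   # placeholder: 16 kHz recording
    language_code="en-US",
    speech_contexts=[speech_context],
)

with open("command.wav", "rb") as f:  # placeholder audio file
    audio = speech.RecognitionAudio(content=f.read())

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
I'm looking for an equivalent way to bias the Natural Language API towards phrases like 'Customer Profitability'.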

Related

How can we extract parts of a text about a topic?

I want to do topic modelling, but in my case one article may contain many topics:
I have an article (Word file) that contains several topics, and each topic is associated with a company (see the example below).
I have a text as input:
"IBM is an international company specializing in all that is IT, on the other hand Facebook is a social network and Google is a search engine. IBM invented a very powerful computer."
Knowing we have labelled topics: "Products and services", "Communications", "Products and services", ...
I want to have as output:
IBM : Products and services
Facebook : Communications
Google : Products and services
So, I think we can do this by splitting the text: associate each company with the parts of the text that talk about it, for example:
IBM : ['IBM is an international company specializing in all that is IT', 'IBM invented a very powerful computer.']
Facebook : ['Facebook is a social network']
Google : ['Google is a search engine']
then, for each company, perform topic modelling based on its parts of the text ...
OUTPUT:
IBM : Products and services
Facebook : Communications
Google : Products and services
Could you help me with how to split and match the parts of the text to each company, e.g. how to determine which parts talk about Facebook?
It seems like you have two separate problems: (1) Data preparation/cleaning, i.e. splitting your text into the right units for analysis; (2) classifying the different units of text into "topics".
1. Data Preparation
An 'easy' way of doing this would be to split your text into sentences and use sentences as your unit of analysis. spaCy is good for this, for example (see e.g. this answer here). Your example is more difficult since you want to split sentences even further, so you would have to come up with custom logic for splitting your text according to specific patterns, e.g. using regular expressions. I don't think there is a standard way of doing this, and it depends very much on your data.
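A minimal sketch of that first step, assuming spaCy with the en_core_web_sm model; the rule of assigning a sentence to every company it mentions by name is a simplification for illustration, not a general solution:
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("IBM is an international company specializing in all that is IT, "
        "on the other hand Facebook is a social network and Google is a search engine. "
        "IBM invented a very powerful computer.")
companies = ["IBM", "Facebook", "Google"]

doc = nlp(text)
parts = {company: [] for company in companies}
for sent in doc.sents:  # spaCy's sentence segmentation
    for company in companies:
        if company in sent.text:  # naive matching: the sentence mentions the company by name
            parts[company].append(sent.text.strip())

print(parts)
Note that the first sentence mentions all three companies, so this naive version assigns it to all of them; that is exactly why you would need custom, sub-sentence splitting (e.g. on clauses) for a text like yours.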
2. Topic classification
If I understand correctly, you already have the labels ("topics" like ["Products and services", "Communications"]) which you want to attribute to different texts.
In this case, topic modeling is probably not the right tool, because topic modeling is mostly used when you want to discover new topics and don't know the topics/labels yet. And in any case, a topic model would only return the most frequent/exclusive words associated with a topic, not a neat, abstract topic label like "Products and services". You also need enough text for a topic model to produce meaningful output.
A more elegant solution is zero-shot classification. This basically means that you take a general machine learning model that has been pre-trained by someone else in a very general way for text classification, and you simply apply it to your specific use case of "topic classification" without having to train/fine-tune it. The Transformers library has a very easy-to-use implementation of this.
# pip install transformers==3.1.0 # pip install in terminal
from transformers import pipeline
classifier = pipeline("zero-shot-classification")
sequence1 = "IBM is an international company specializing in all that is IT"
sequence2 = "Facebook is a social network."
sequence3 = "Google is a search engine. "
candidate_labels = ["Products and services", "Communications"]
classifier(sequence1, candidate_labels)
# output: {'labels': ['Products and services', 'Communications'], 'scores': [0.8678683042526245, 0.1321316659450531]}
classifier(sequence2, candidate_labels)
# output: {'labels': ['Communications', 'Products and services'], 'scores': [0.525628387928009, 0.47437164187431335]}
classifier(sequence3, candidate_labels)
# output: {'labels': ['Products and services', 'Communications'], 'scores': [0.5514479279518127, 0.44855210185050964]}
=> it classifies all texts correctly based on your example and labels. The label ("topic") with the highest score is the one the model thinks fits your text best. Note that you have to think hard about which labels are the most suitable. In your example, I wouldn't even be sure as a human which one fits better, and the model is also not very sure. With this zero-shot classification approach you can choose the topic labels that you find most adequate.
Here is an interactive web application to see what it does without coding. Here is a Jupyter notebook which demonstrates how to use it in Python. You can just copy-paste code from the notebook.

Word tolerance of training phrases in Dialogflow (to create a Google Action)

I have an important question. At the moment I am writing my last essay before starting my bachelor thesis. It is about voice apps, which of course includes Google Actions.
But I need some information about the word tolerance of the training phrases, and I have not been able to find any information on the internet yet. Does Google only recognize the training phrases typed in by the developer, or can Google add phrases over time or with training (so that the user can say different phrases to trigger an intent that were not typed in by the developer in the beginning)?
It is really important for my essay, so I would be very happy if you could help me with this question.
I wish you a nice weekend!
Dialogflow uses the training phrases to build a machine-learning algorithm to match similar phrases that aren't exactly what you enter.
For example, the training phrase "I want pizza" trains your agent to recognize end-user expressions that are similar to that phrase, like "Get a pizza" or "Order pizza".
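For illustration, this is roughly how an intent with a handful of training phrases can be created programmatically; a minimal sketch with the google-cloud-dialogflow client, where the project ID, intent name and response text are placeholders:
# pip install google-cloud-dialogflow
from google.cloud import dialogflow

project_id = "my-project-id"  # placeholder
intents_client = dialogflow.IntentsClient()
parent = dialogflow.AgentsClient.agent_path(project_id)

# A few seed phrases; Dialogflow's matching also catches similar wordings
# such as "Get a pizza" or "Order pizza" that are not listed here.
phrases = ["I want pizza", "Can I order a pizza", "Pizza please"]
training_phrases = [
    dialogflow.Intent.TrainingPhrase(parts=[dialogflow.Intent.TrainingPhrase.Part(text=p)])
    for p in phrases
]

message = dialogflow.Intent.Message(
    text=dialogflow.Intent.Message.Text(text=["Sure, one pizza coming up."])
)

intent = dialogflow.Intent(
    display_name="order.pizza",
    training_phrases=training_phrases,
    messages=[message],
)

response = intents_client.create_intent(request={"parent": parent, "intent": intent})
print("Created intent:", response.name)
The generalization itself happens on Google's side once the agent is trained; you only provide the seed phrases.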

Is there any way to make Google Assistant's speech recognition better recognise words used in my Dialogflow agent?

I am using Dialogflow to create a chatbot that can be used on Google Assistant. However, the speech recognition often mis-recognizes the intended word. For example, when I say the word "seal", it wrongly recognizes the spoken word as "shield".
Is there any way to "train" or make google assistant better recognize a word?
If you have a limited amount of words that you would like to improve upon, then using Dialogflow's entities would be an option. For instance, if you are trying to recognize certain animals. You can create a set of animals as entities and set the intent to look for an animal entity in the user input.
Besides this option, I don't know of any other way to improve the speech recognition itself. You could train Dialogflow to map both "seal" and "shield" to your desired intent, but that doesn't change the actual word; it will still be "shield".
For any other improvements to the speech recognition, I'm afraid you will have to wait for updates from Google to their algorithms.
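To make the entity suggestion concrete, this is roughly how a custom entity type with its values could be created through the API; a minimal sketch with the google-cloud-dialogflow client, using a placeholder project ID and example animal values:
# pip install google-cloud-dialogflow
from google.cloud import dialogflow

project_id = "my-project-id"  # placeholder
entity_types_client = dialogflow.EntityTypesClient()
parent = dialogflow.AgentsClient.agent_path(project_id)

# Each entity value can carry synonyms, which also helps matching.
animals = {"seal": ["seal", "seals"], "shark": ["shark", "sharks"]}
entities = [
    dialogflow.EntityType.Entity(value=value, synonyms=synonyms)
    for value, synonyms in animals.items()
]

entity_type = dialogflow.EntityType(
    display_name="animal",
    kind=dialogflow.EntityType.Kind.KIND_MAP,
    entities=entities,
)

response = entity_types_client.create_entity_type(
    request={"parent": parent, "entity_type": entity_type}
)
print("Created entity type:", response.name)
You would then mark an @animal parameter in the intents that should look for an animal in the user input.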
I just found out there is a new beta feature in Dialogflow that should help:
https://cloud.google.com/dialogflow/docs/speech-adaptation
Edit: However, this does not work with Actions on Google.
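Separately from that agent-level setting, you can also supply speech contexts yourself on the detect-intent audio request. A rough sketch with the google-cloud-dialogflow client; the project ID, session ID, audio file, encoding and boost value are placeholders:
# pip install google-cloud-dialogflow
from google.cloud import dialogflow

project_id = "my-project-id"  # placeholder
session_id = "test-session"   # placeholder
sessions_client = dialogflow.SessionsClient()
session = sessions_client.session_path(project_id, session_id)

audio_config = dialogflow.InputAudioConfig(
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,  # placeholder: raw 16-bit PCM
    sample_rate_hertz=16000,
    language_code="en-US",
    # Bias recognition towards the words your agent cares about.
    speech_contexts=[dialogflow.SpeechContext(phrases=["seal"], boost=20.0)],
)

with open("utterance.wav", "rb") as f:  # placeholder audio file
    audio_bytes = f.read()

response = sessions_client.detect_intent(
    request={
        "session": session,
        "query_input": dialogflow.QueryInput(audio_config=audio_config),
        "input_audio": audio_bytes,
    }
)
print(response.query_result.query_text)           # what the recognizer heard
print(response.query_result.intent.display_name)  # which intent matched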

User custom input value-based decision tree implementation using Dialogflow

I need the following features in the flow:
Based on the user's input, like Gold or Silver, the bot should direct them to the corresponding credit card flow.
How to enable loops in flows.
How to perform a 4-to-5-step-long guided flow, which is a big, complex tree.
I have gone through the documentation and read about input and output contexts, but that was not of much help; I could not find anything on providing hops in the conversation flow as shown in the diagram.
I have tried using Dialogflow only.
I am not able to navigate between the flows.
I would suggest building a basic Action to get familiar with the concepts; check out this codelab.
If I understand correctly, you want to ask the user:
"...which one should I tell you about?"
and then the user can say "Silver", "Gold" or "Platinum".
First try to just implement this simple step. Create 4 intents in Dialogflow.
Welcome intent, the response should be "...which one should I tell you about?"
Silver Intent. Training phrase should be "Silver", response should be "You chose Silver"
Gold Intent. Training phrase should be "Gold", response should be "You chose Gold"
Platinum Intent. Training phrase should be "Platinum", response should be "You chose Platinum"
Once you've done that, test it! It should trigger the correct intent based on your input. It's very simple to build a "switch" from a flow chart in Dialogflow.
Next step: You can replace Silver/Gold/Platinum with a custom entity, read more about this here.
This should already help you implement your flow chart.
In your chart you currently have just one answer for each card type (Silver/Gold/Platinum). If you want more than one step per card type and need to remember that you're still in the context of the Silver card, you can use contexts. In Dialogflow you can hover over the Silver intent you created earlier and create a follow-up intent, but with your current flow chart it's not necessary.
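If you do end up needing contexts, one common place to set them is a webhook fulfillment. A rough sketch of a Flask webhook that answers the Silver intent and sets an output context so later turns know the user is still in the Silver flow; the route, intent name, context name and lifespan are placeholders:
# pip install flask
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json()
    session = req["session"]  # e.g. projects/<project>/agent/sessions/<session-id>
    intent = req["queryResult"]["intent"]["displayName"]

    if intent == "Silver Intent":
        return jsonify({
            "fulfillmentText": "You chose Silver. Would you like to hear about fees or benefits?",
            # The output context keeps the conversation "inside" the Silver flow for the next turns.
            "outputContexts": [{
                "name": f"{session}/contexts/silver-card",  # placeholder context name
                "lifespanCount": 5,
            }],
        })

    return jsonify({"fulfillmentText": "Which card should I tell you about: Silver, Gold or Platinum?"})

if __name__ == "__main__":
    app.run(port=8080)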

Can we use LUIS and Text Analytics API together

I need to develop a chat bot on Azure for user interaction. I have used LUIS, and now I want the bot to analyze the chat and suggest the necessary changes to the user. So, should I use the Text Analytics API for this, and can LUIS and the Text Analytics API be used together?
Text analytics can determine sentiments, extract key phrases and detect the language used. If you want to find the intent of the user or extract entities from a text, you can use LUIS.
For "The hotel is the worst ever" a sentiment analysis can tell that the sentiment is negative. For the same sentence key phrase extraction extracts the key words/phrases: "hotel, worst", without any interpretation of the meaning or context.
For "Turn on the yellow light", LUIS can be trained to extract intent (Operate Light) and entities (Action: turn on, Object: Yellow Light) with a meaning and a context.
Text Analytics and LUIS expose separate APIs that just take text as input, so they can be used independently of each other. They have no built-in integration between them, so that's up to the consumer to implement.
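To make the Text Analytics side concrete, a minimal sketch with the azure-ai-textanalytics client, using a placeholder endpoint and key; the LUIS side would be a separate call to your own LUIS app's prediction endpoint:
# pip install azure-ai-textanalytics
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-text-analytics-key>"                                  # placeholder
client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = ["The hotel is the worst ever"]

# Sentiment: positive / neutral / negative, with confidence scores
sentiment = client.analyze_sentiment(documents)[0]
print(sentiment.sentiment, sentiment.confidence_scores)

# Key phrases: e.g. "hotel", without interpreting meaning or context
key_phrases = client.extract_key_phrases(documents)[0]
print(key_phrases.key_phrases)
The bot can feed each user message through both services and combine the LUIS intent/entities with the Text Analytics results in its own logic.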
