Can different entities in Dialogflow have the same value? - dialogflow-es

I am trying to understand how to structure intents when entities contain the same strings as values.
I imagine that as I add functionality this will become a mess.
What is the correct approach to handling this "mixed" content?
Example:
Entity 1: content
word document(s)
html page(s)
video(s)
Entity 2: content-specifier
video(s)
image(s)
car(s)
Example 1: show me all [html pages] with [videos]
the expectation is to have
#content => "html page"
#content-specifier => "video"
Example 2: show me all [videos] with [cars]
the expectation is to have
#content => "video"
#content-specifier => "car"

I believe that at the beginning you're going to get a lot of false-positive matches. After training and adding a lot of training phrases, though, it should be fine. To help it match the options better, also use a template.
Three things to note here:
Make sure you correct any values it misclassified in your intent.
Go to the Training page frequently and make sure that the entities are correctly recognized; make any necessary changes.
Create a template for the user input. To do so, in your intent's training phrases, click on the quotes icon. It will change to an "at" symbol (@). Then add the expected format of your user's input, as in the sketch below.
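For instance, a template covering the examples above might look like the following (assuming the legacy ES template syntax of @entity-name:parameter-name; the parameter names are placeholders):

show me all @content:content with @content-specifier:content-specifier

With a template, each marked position is matched directly against the entries of the named entity, which should help when the same word appears in more than one entity.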

Related

Does Docusign offer 'placeholder' support for Text Tags?

I would like to add some placeholder text to assist the signer with what type of data they should populate. Although a nearby label is the main source of assistance/guidance, I was hoping to add some placeholder text as well. Looking through the documentation for Text, I do not see such an attribute. So if the Text class does not support 'placeholder text', is the next best thing to use the tooltip argument?
There's no exact equivalent feature. You can do these things, but none is exactly what you asked about:
You can set the initial value of the text tab. Unlike a placeholder, this does not require the user to modify it, and it is not removed when they focus on that tab.
You can use a tooltip, as you mentioned.
You can use a label, as you mentioned.
In addition, you can use various validation rules to ensure the user filled the tab correctly.
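As a rough sketch of the first and last options, here is how an initial value, tooltip, and validation rule might be set on a Text tab with the docusign_esign Python SDK (the tab label, anchor string, and regex are made-up placeholders):

from docusign_esign import Signer, Tabs, Text

# The initial value pre-fills the tab; unlike a true placeholder,
# the signer has to clear it themselves.
text_tab = Text(
    tab_label="favorite_color",           # hypothetical label
    anchor_string="/color/",              # hypothetical anchor in the document
    value="e.g. blue",                    # initial value shown in the tab
    tooltip="Enter your favorite color",  # the tooltip mentioned above
    validation_pattern="^[A-Za-z ]+$",    # regex the entry must satisfy
    validation_message="Letters and spaces only",
)

signer = Signer(
    email="signer@example.com",
    name="Example Signer",
    recipient_id="1",
    tabs=Tabs(text_tabs=[text_tab]),
)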

Prevent direct selection from NL training

I matched the concept in selection-of in NL training so that it can now accept the input and display the result when I voice-input the option. But the downside is that it will directly show the result even when I am not on the selection page. Is there any way to prevent this? My approach is to match the goal of the NL training to the concept of the selection option, marked "at prompt" for the concept.
[Update] I would like to show the menu to the user first, before they make their selection. The menu can be shown when I run an action, where playNews > getNews > getMenu (a selection-of input view):
action (getNews) {
  type (Constructor)
  description (__DESCRIPTION__)
  collect {
    input (whatuserwant) {
      type (userWantToHear)
      min (Required) max (One)
      default-init {
        intent {
          goal: getMenu
        }
      }
      default-select {
        with-rule {
          select-first
        }
      }
    }
  }
  output (newsAudio)
}
To allow voice-input selection, I added training for the concept.
So it is able to select an item from the menu, but it will also accept the selection and run even when I am not in the menu yet. Is it possible to get rid of this? Or is this just how Bixby behaves?
Update: I would remove default-select and add prompt-behavior (AlwaysElicitation) to the input; a sketch of the change is below. Read more in our docs.
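Applied to the getNews action above, the input block would then look roughly like this (keeping the original names):

input (whatuserwant) {
  type (userWantToHear)
  min (Required) max (One)
  // Always prompt the user instead of auto-selecting a value
  prompt-behavior (AlwaysElicitation)
  default-init {
    intent {
      goal: getMenu
    }
  }
}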
You may also want to check this example on GitHub to see how to construct an input view selection from another input of the action. The example is a simplified version of how QuizIt handles the selection part. You may also want to check the training to see how Bixby takes different actions with and without the top-level "A" training example.
The input prompt should be as easy and simple as you expect: present a list with a message; the user can then either tap or select by voice, and the action continues.
Here is some additional info you might find useful:
The Bixby platform will try to match every property of a struct when an input is missing, so mark a property as visibility (Private) to prevent that (see the sketch below). You can also use prompt-behavior (AlwaysSelection) to force a selection for an input.
In the case of prompt/continuation training, Bixby treats it as top-level training if no other training fits. For example, in a simple quiz capsule that constructs a default quiz, the top-level utterance "A" would be treated as answering the first question with A. To prevent this, just add a training example for "A" and match it to the action you want; Bixby will then use that top-level training instead of the prompt training.
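As a rough sketch of the visibility flag (the structure and property names here are made up):

structure (NewsItem) {
  property (internalId) {
    type (ItemId)
    min (Optional) max (One)
    // Private properties are not matched against missing inputs
    visibility (Private)
  }
}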

Exact match on training phrase intent Dialogflow

The training phrase only contains "How to order", but when I type just "order" it still matches and replies to me.
What can I do to make it reply only to very specific wording?
If I didn't explain my problem well enough, here's a link to someone who has the same problem and explains it better than I can:
https://www.reddit.com/r/Dialogflow/comments/dmy5x6/exact_match/
Click the options button (the three-dot button) beside the Save button inside the intent.
Click 'Disable ML'.
If ML is disabled, the intent follows a rule-based grammar matching algorithm, which means it will only match user expressions against the exact training phrases defined in the intent.
Add the keyword 'order' to the default fallback intent's training phrases and you should be fine. Or:
You can also create a new intent that handles only a specific keyword, 'order' in your case, and add a quick reply asking "Did you mean how to order?" with Yes and No options.
Hope this helps :) If you have any queries, do drop a comment.
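To verify the behaviour after disabling ML, a quick check with the Dialogflow ES Python client could look like this (the project and session IDs are placeholders):

from google.cloud import dialogflow_v2 as dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-project-id", "test-session")

# Send the bare keyword and inspect which intent actually matched.
query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="order", language_code="en")
)
response = session_client.detect_intent(session=session, query_input=query_input)
result = response.query_result
print(result.intent.display_name, result.intent_detection_confidence)
# With ML disabled, "order" alone should now fall through to the fallback intent.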

Indexing Bixby Spoken-Summary

I received help getting Bixby to read the list in the view, but now I am trying to make it useful for hands-free list navigation. Is there a way to use indexing in Spoken-Summary? Currently it just reads each item in the list, and it will be difficult to use ordinal selection without indexing.
Indeed, where-each does not have a child-key index-var. As a workaround, you can add an index property to the concept structure and set the index value in your JS file.
However, I would think the speech itself should be sufficient for selecting content. In the case of read-one, Bixby pauses after each item, waiting for "next" or "yes". In the case of read-many, the developer can set the page size.
You can (and maybe should) implement your capsule so that it takes the content rather than a number.
For example: "watch news" --> "which of the following channels would you like?" --> "BBC, NBC, CNN, FOX". At the input prompt, rather than an auto-answer machine ("press 1 for BBC, press 2 for NBC..."), the user should be able to say "CNN" or "CNN News" and have it match as an input.
In case you really need the index, the current workaround is to add the index as part of the structure in the JS function returning the list, as sketched below. It should not be hard. The voice commands "first", "second"... "last" are built-in features and should work.
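A minimal sketch of that workaround in the endpoint's JS, assuming an index property already exists on the concept structure and a hypothetical fetchNews() data source:

// Attach a 1-based index to each item so Spoken-Summary can read it
// and ordinal selection lines up with what the user hears.
module.exports.function = function getNews() {
  var items = fetchNews(); // placeholder for your actual data source
  return items.map(function (item, i) {
    item.index = i + 1;
    return item;
  });
};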
You can also go to the Developer Center and file a feature request to add index-var to where-each, but it will be a PM decision whether and when to implement such a feature.

Google prediction API - Training data syntax for multi classification

I am trying to harness the power of the Google Prediction API to classify my data. Each item in my DB can have multiple categories assigned to it.
For example: "My Nexus phone is rebooting constantly" could be assigned both the #Android and #troubleshooting tags.
I would like to upload my training data to Google, but I'm not sure how to apply both tags to the same content. I've only found syntax that provides one category for each piece of content, like so:
"Android" ,"My Nexus phone is rebooting constantly"
What is the right syntax for multi-classification training data?
Unless I'm misunderstanding something in your question, I think the answer is in the docs here.
Namely, the section about text strings explains that when you submit a text string, the system actually cuts it into multiple strings, using whitespace as the delimiter. They point out that "Godzilla vs Mothra" becomes "Godzilla", "vs", and "Mothra". So in your case, you could just use "Android troubleshooting"; the system will separate it into "Android" and "troubleshooting".
From the docs:
Each line can only have one label assigned, but you can apply multiple labels to one example by repeating an example and applying different labels to each one. For example:
"excited", "OMG! Just had a fabulous day!"
"annoying", "OMG! Just had a fabulous day!"
If you send a tweet to this model, you might get a classification something like this: "excited":0.6, "annoying":0.2.
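A small sketch of generating such training rows in Python, one CSV line per label (the file name and data are made up):

import csv

# Each (labels, text) pair becomes one row per label, as the docs describe.
examples = [
    (["Android", "troubleshooting"], "My Nexus phone is rebooting constantly"),
]

with open("training.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    for labels, text in examples:
        for label in labels:
            writer.writerow([label, text])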
