The bot I have created in Dialogflow uses a webhook to connect to our external site.
One of the intents we have for the bot searches for knowledge within the site. Originally, the Request Knowledge intent had a phrase that was an @sys.any parameter, which would then become the search term.
However, because the whole phrase was an @sys.any parameter, this intent would be prioritised over most other intents.
We are trying to get users to use natural language with the bot; however, people still just type in one word or a phrase for the search function.
What I would like, if possible, is a fallback intent that acts as the search function. So if the bot cannot successfully match the one word, it would then run a search for that word.
I am not sure if this would fix the problem or just produce more issues.
If anyone has solved something similar, I would greatly appreciate the help. Sorry if this is simple to do; I am new to the whole Dialogflow world!
You can turn fulfillment on for Fallback Intents, and these will be sent to your webhook. The JSON includes the full text of what was entered.
However... the results will clearly be less useful, since some of them will be conversational text that just didn't get picked up by one of the other Intents.
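For example, here is a minimal sketch of such a fallback handler using the dialogflow-fulfillment library; the searchKnowledgeBase() helper is hypothetical and stands in for however you query your external site:

const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = (request, response) => {
  const agent = new WebhookClient({ request, response });

  // Fallback handler: treat whatever the user typed as a search term.
  async function fallback(agent) {
    // agent.query holds the full text of what the user entered.
    const results = await searchKnowledgeBase(agent.query); // hypothetical helper
    if (results.length > 0) {
      agent.add(`Here's what I found for "${agent.query}": ${results[0].title}`);
    } else {
      agent.add(`Sorry, I couldn't find anything for "${agent.query}".`);
    }
  }

  const intentMap = new Map();
  intentMap.set('Default Fallback Intent', fallback);
  agent.handleRequest(intentMap);
};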
I am using actions-on-google and Dialogflow to build a social robot for the elderly.
I was wondering how I can easily repeat the last sentence when the user asks ("repeat please"), as often a senior doesn't hear the sentence the first time.
One way would be to have repeat follow-up intents in Dialogflow, but this is quite heavy since:
you need to add one after each intent, and I have many
in a multi-user environment you need to keep track of the last sentence for every user...
Another way would be to take advantage of Dialogflow Contexts. As you send the message, you can also add that message to a context (for example, you can call it "last_message"). You can then have another Intent that takes the "last_message" context as an input context and, if triggered, uses the value saved in the context to repeat it.
However, I still have the problem that I need to add a context to every intent I have, and I have many.
Does anyone know how to accomplish this in a quicker way? I found this package, but it is in JS and I need it in Python: https://github.com/SysCoder/VoiceRepeater/pulls
How do I implement this VoiceRepeater library? Do I put the code under the fulfillment function 'repeat' I have made, which is mapped to an intent called 'repeat' that responds to utterances such as "Sorry, could you repeat that"? Also, where do I install the VoiceRepeater library (npm install voice-repeater --save)?
Using Followup Intents is probably the wrong way to do this. As you note, it is way too heavy for more than a few Intents. It may be useful in certain circumstances if you want the "repeated" message to clarify the response in a different way, but in general, it isn't very useful. (It should also be noted that Followup Intents use Contexts, but in a different way than discussed below.)
You don't need to add the Context to the UI as part of the Outgoing Context - you would set this as part of your Fulfillment. It would include a parameter that either contained exactly what you said, or the information you needed to recreate what you said (possibly in a different form, if appropriate). In your "repeat" Intent, you'd read the value that you had saved in this Context, and send it as the output again. If you're using SSML, you may wish to change the speed or volume, if that is appropriate.
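As a rough sketch of this approach using the actions-on-google v2 library (the context name last_message and the askAndRemember() helper are just illustrative):

const { dialogflow } = require('actions-on-google');
const app = dialogflow();

// Helper so every reply is also saved in a long-lived context.
function askAndRemember(conv, text) {
  conv.contexts.set('last_message', 99, { text: text });
  conv.ask(text);
}

app.intent('some.intent', (conv) => {
  askAndRemember(conv, 'Your appointment is at 3 PM today.');
});

// The "repeat" Intent reads the saved value back out of the context.
app.intent('repeat', (conv) => {
  const ctx = conv.contexts.get('last_message');
  const text = ctx ? ctx.parameters.text : "Sorry, I don't remember what I said.";
  askAndRemember(conv, text);
});

With a helper like this, you still call it from every intent handler, but you avoid duplicating the context bookkeeping itself.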
Update based on new questions
The readme for VoiceRepeater has the basics of what you need to do to use it, but it does assume a little familiarity with Node. In general, though, yes: you install it the way you describe, set up an Intent that captures requests to repeat, and register a handler function (repeatLastStatement(app) in the readme) that handles the Intent by sending a reply through voiceRepeater.lastPromptWithPrefix().
It also may assume you're using version 1 of the actions-on-google library. I haven't dug too deeply into the code, but it looks like it replaces the library's ask function with its own, and I'm not sure how well that works with version 2 of the actions-on-google library.
Unlike VoiceRepeater, multivocal doesn't require you to register handlers explicitly, since it tries to hide as much boilerplate as possible under the covers. You just need to define the replies that you might want it to use. It uses the Context scheme I outlined above to store responses and make them available when the user asks for one to be repeated.
There aren't any videos on using multivocal, but the simple example does include configuration illustrating how to set up responses for the "multivocal.repeat" Intent. While VoiceRepeater works alongside the actions-on-google library, multivocal is a complete replacement, offering a more template-based approach to building fulfillment.
However, neither of these directly helps if you want to implement it in Python. But if you look at the source for VoiceRepeater, you can get a sense of how to implement it yourself in Python.
The key bit is on line 47 where it saves the reply in a context. (It also saves the reply with a prefix message.) It then calls the original function that would send the reply:
app.setContext("last_prompt", 100,
{
"last_prompt": textToSpeech,
"prefixed_last_prompt": repeatPrefix + lastStatement,
});
originalAsk(response);
Later, in the call to lastPromptWithPrefix(), it uses the contents of the Context to send a reply.
lastPromptWithPrefix() {
  return this.app.getContext("last_prompt") !== null
    ? this.app.getContextArgument("last_prompt", "prefixed_last_prompt").value
    : "um....I don't remember what I said!";
}
I'm trying to make a simple bot with Dialogflow to remind me to update my calendar with what I did during the day.
I want it to go something like this:
Bot: Hey, what did you do from 2pm-5pm today?
User: I did jogging from 2pm-3pm
Bot: Added "Jogging" to your calendar from 2pm-3pm. What about from 3pm-5pm?
User: I did reading.
Bot: Added "reading" from 3pm-5pm to your calendar.
My question is: how do I extract the activity (such as jogging or reading), since it can be literally anything? I guess I need to identify the "I did" part and take what comes after it and before the "from 2pm-3pm" part. I have an idea of how to do this with Python, but I'm wondering if it's possible using Dialogflow?
Any help is greatly appreciated, thank you
You would use the @sys.any entity type and assign it to that part of the training phrases that you're setting up in Dialogflow.
As you're setting up the training phrases, keep in mind that there may be many ways to say the same sort of thing, which is why using Dialogflow's training phrases is better than trying to capture parameters using string parsing.
So perhaps you want something like this:
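As a sketch (the parameter names activity, start, and end are just examples), your training phrases might look like:

I did jogging from 2pm to 3pm
I did reading
I went running between 2pm and 4pm

with "jogging", "reading", and "running" highlighted as an @sys.any parameter named activity, and the times annotated as @sys.time parameters named start and end. A fulfillment handler (here using the dialogflow-fulfillment library) could then read the values back out:

function logActivity(agent) {
  // Parameters are filled in by Dialogflow from the annotated training phrases.
  const { activity, start, end } = agent.parameters;
  agent.add(`Added "${activity}" to your calendar from ${start} to ${end}.`);
}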
I'm currently trying to use the built-in entity @sys.zip-code from Dialogflow (formerly API.AI) for capturing zip codes. However, so far it does not seem to recognize any actual zip codes except those which I explicitly set through training. It also does not recognize the five-digit pattern as a possible match if @sys.phone-number is used in another intent (e.g., 54545 gets recognized as a phone number rather than a zip code).
Should I upload a list of known zip codes through the training section to get this working? Or is there something I'm missing from the built-in functionality? I haven't seen a ton of info online on how best to utilize this entity, so I figured I'd ask here before coming up with a custom solution.
Thanks in advance!
I think the best way is to prompt the user with something like "Could I get your name and zip code?". The intent I have created contains multiple combinations of "User says" phrases. They are as below:
"@sys.given-name @sys.zip-code"
"@sys.zip-code @sys.given-name"
"@sys.given-name"
"@sys.zip-code"
I also have the required parameters set up to pick these values, with prompt messages.
I have attached a picture of this setup, which I have iterated on.
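If you're also using a webhook, a handler could then read both slot-filled values (a sketch, assuming the parameters are named given-name and zip-code):

function collectNameAndZip(agent) {
  // Both values are guaranteed to be present once slot filling completes.
  const name = agent.parameters['given-name'];
  const zip = agent.parameters['zip-code'];
  agent.add(`Thanks ${name}, I have your zip code as ${zip}.`);
}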
With Dialogflow (API.AI), I find the problem that vessel names are not matched well when the input comes from Google Home.
It seems as if the speech-to-text engine completely ignores them and just does the speech-to-text based on its dictionary, so Dialogflow can't match the resulting text at the end.
Is it really like that, or is there some way to improve it?
Thanks and best regards
I'd recommend looking at Dialogflow's training feature to identify where the speech recognition of the Google Assistant may not have worked the way you expect. In those cases, you'll see how Google's speech recognition detected words you may not have accounted for. In cases where you'd like to match these unrecognized words to an entity value, simply add them as synonyms.
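For example, a developer entity for vessel names might list each misheard transcription as a synonym (a sketch; the names and transcriptions are made up):

{
  "name": "vessel",
  "entries": [
    {
      "value": "MV Aurora",
      "synonyms": ["MV Aurora", "aurora", "m v aurora", "my aurora"]
    }
  ]
}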
What's wrong with Wit.ai? My bot understands some numbers as locations, and this breaks my stories. You can see the picture below:
What can I do about that? Thank you.
If you have previously validated some GPS coordinates in your Understanding console, this type of misprediction is possible. To avoid it, validate some useful numbers with the wit/number entity; GPS coordinates should be validated as wit/location.
You may also have accidentally validated some numbers as the wit/location entity; feed some numbers to the wit/number entity instead. wit.ai does not know anything about numbers, locations, etc. until you have validated them first. Try writing "Amsterdam" in your Understanding tab: you'll see that wit.ai cannot assign this text to any intent or location entity because you have not trained its model yet :) Validate it with wit/location; after that it will know.
You can also train (validate or feed) your own wit.ai NLP without the Understanding tab. You can use a simple curl command and a loop.
Check this out:
https://wit.ai/docs/http/20160526
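For example, something like this (a sketch based on the 20160526 HTTP API linked above; check those docs for the exact endpoint and payload, and replace $WIT_TOKEN with your server access token):

curl -XPOST 'https://api.wit.ai/samples?v=20160526' \
  -H "Authorization: Bearer $WIT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{
        "text": "Amsterdam",
        "entities": [{ "entity": "location", "value": "Amsterdam" }]
      }]'

Run a command like this in a loop over your list of texts to validate them in bulk.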
Have a nice day :)