How To Pick Out an Activity Between Phrases in Dialogflow - dialogflow-es

I'm trying to make a simple bot with Dialogflow to remind me to update my calendar with what I did during the day.
I want it to go something like this:
Bot: Hey, what did you do from 2pm-5pm today?
User: I did jogging from 2pm-3pm
Bot: Added "Jogging" to your calendar from 2pm-3pm. What about from 3pm-5pm?
User: I did reading.
Bot: Added "reading" from 3pm-5pm to your calendar.
My question is: how do I extract the activity (such as jogging or reading), since it can be literally anything? I guess I need to identify the "I did" part and see what comes after it and before the "from 2pm-3pm" part. I have an idea of how to do this with Python, but I'm wondering if it's possible using Dialogflow?
Any help is greatly appreciated, thank you

You would use the @sys.any entity type and assign it to that part of the training phrases that you're setting up in Dialogflow.
As you're setting up the training phrases, keep in mind that there may be many ways to say the same sort of thing, which is why using Dialogflow's training phrases is better than trying to capture parameters with string parsing.
So perhaps you want something like this:
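For example, a training phrase such as "I did jogging from 2pm to 3pm" could be annotated so that "jogging" maps to an @sys.any parameter and the times map to @sys.time parameters. Below is a minimal fulfillment sketch, assuming a Flask webhook and hypothetical parameter names activity, start_time, and end_time (none of these names come from the original post):

```python
# Minimal Dialogflow ES fulfillment sketch (Flask).
# Assumes the intent defines parameters named "activity" (@sys.any) and
# "start_time"/"end_time" (@sys.time) -- these names are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    params = body["queryResult"]["parameters"]

    activity = params.get("activity", "")
    start_time = params.get("start_time", "")
    end_time = params.get("end_time", "")

    # Here you would call your calendar API with the extracted values.
    reply = f'Added "{activity}" to your calendar from {start_time} to {end_time}.'
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```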

Related

Fallback intent as a search

The bot which I have created within Dialogflow is using a webhook to link to our external site.
One of the intents we have for the bot is to search for knowledge within the site. Originally, the Request Knowledge intent had a phrase which was a @sys.any parameter, which would then become the search term.
However, because the whole phrase was a @sys.any parameter, it would be prioritised over most other intents.
We are trying to get users to use natural language when using the bot, but people still just type in one word or a short phrase for the search function.
What I would like, if possible, is a fallback intent which acts as the search function. So if the bot cannot successfully match the one word, it would then run a search for that word.
I am not sure if this would fix this problem or just produce more issues.
If anyone has solved something similar to this, I would greatly appreciate the help. Sorry if this is simple to do, I am all new to the whole Dialogflow world!
You can turn fulfillment on for Fallback Intents, and these will be sent to your webhook. The JSON includes the full text of what was entered.
However... the results will be noisier, since some of what reaches the fallback will be conversational text that simply didn't get picked up by one of the other Intents.
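A minimal sketch of that approach, assuming a Flask webhook and a hypothetical search_knowledge_base() helper; in a Dialogflow ES webhook request, the raw user input is available as queryResult.queryText:

```python
# Fallback-intent fulfillment sketch (Flask). search_knowledge_base() is
# hypothetical -- replace it with a call to your site's search API.
from flask import Flask, request, jsonify

app = Flask(__name__)

def search_knowledge_base(query):
    # Placeholder: call your external search endpoint here.
    return [f"Result for '{query}'"]

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    query_result = body["queryResult"]

    # Only treat the request as a search when the fallback intent fired;
    # adjust the name below to match your agent's fallback intent.
    if query_result["intent"]["displayName"] == "Default Fallback Intent":
        query = query_result["queryText"]  # full text of what the user typed
        results = search_knowledge_base(query)
        reply = results[0] if results else "I couldn't find anything for that."
    else:
        reply = query_result.get("fulfillmentText", "")

    return jsonify({"fulfillmentText": reply})
```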

How to correctly utilize Zip Code entity for DialogFlow?

I'm currently trying to use the built-in entity '@sys.zip-code' from Dialogflow (formerly API.AI) for capturing zip codes. However, so far it does not seem to recognize any actual zip codes except those which I explicitly set through training. It also does not recognize the '5 digit' pattern as a possible match if @sys.phone-number is used in another intent (e.g. 54545 gets recognized as a phone number rather than a zip).
Should I upload a list of known zipcodes through the training section to get this working? Or is there something I'm missing from the built in functionality? Haven't seen a ton of info online on how to best utilize this entity, so figured I'd ask here before coming up with a custom solution.
Thanks in advance!
I think the best way is to prompt the user with something like "Could I get your name and zip code?". The intent which I have created contains multiple combinations of "User says" phrases. They are as below:
"@sys.given-name @sys.zip-code"
"@sys.zip-code @sys.given-name"
"@sys.given-name"
"@sys.zip-code"
I also have required parameters set up to pick these values, with prompt messages.
I have attached a picture of the setup I have iterated on.
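If you want a safety net in fulfillment, here is a small sketch that re-checks the captured value against a 5-digit pattern; the parameter names given-name and zip-code are assumptions based on the entities above:

```python
import re

# Re-validate the value Dialogflow captured for the zip-code parameter.
# Parameter names below are assumptions based on the entities in this answer.
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def extract_contact(parameters):
    name = parameters.get("given-name", "")
    zip_code = str(parameters.get("zip-code", ""))
    if not ZIP_RE.match(zip_code):
        return name, None  # re-prompt the user for the zip code
    return name, zip_code

# Example: parameters as they arrive in queryResult.parameters
print(extract_contact({"given-name": "Alice", "zip-code": "54545"}))
```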

Google Home -> Dialogflow entity matching very bad for non-dictionary entities?

With Dialogflow (API.AI) I find that vessel names are not matched well when the input comes from Google Home.
It seems as if the speech-to-text engine completely ignores them and transcribes purely from its dictionary, so Dialogflow can't match the resulting text at the end.
Is it really like that, or is there some way to improve it?
Thanks and
Best regards
I'd recommend looking at Dialogflow's training feature to identify where the speech recognition of the Google Assistant may not have worked the way you expect. In those cases, you'll see how Google's speech recognition detected words you may not have accounted for. Where you'd like to match these unrecognized words to an entity value, simply add them as synonyms.
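If you end up with many such misrecognitions, synonyms can also be added programmatically rather than through the console. A rough sketch using the google-cloud-dialogflow Python client (the project ID, entity type ID, and vessel names are placeholders, and the exact client signatures may differ between library versions):

```python
# Sketch: add misrecognized spellings as synonyms of a custom entity value.
# Project ID, entity type ID, and the values/synonyms are placeholders.
from google.cloud import dialogflow

def add_vessel_synonyms(project_id, entity_type_id, value, synonyms):
    client = dialogflow.EntityTypesClient()
    parent = client.entity_type_path(project_id, entity_type_id)
    entity = dialogflow.EntityType.Entity(value=value, synonyms=[value] + synonyms)
    client.batch_create_entities(parent=parent, entities=[entity])

# e.g. the Assistant heard "ever green" or "evergrin" instead of "Evergreen"
add_vessel_synonyms("my-project", "my-vessel-entity-id", "Evergreen",
                    ["ever green", "evergrin"])
```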

Wit.ai bot understands wit/number as wit/location

What's wrong with Wit.ai? My bot understands some numbers as locations and it breaks my stories. You can see the picture below:
What can I do about that? Thank you.
If you have earlier validated some GPS coordinates in your Understanding console, this type of misprediction is possible. To avoid it, validate some useful numbers with the wit/number entity, while GPS coordinates and other places should be validated with wit/location.
You may also have accidentally validated some numbers as the wit/location entity; feed some numbers to the wit/number entity to correct this. Wit.ai does not know anything about numbers, locations, etc. until you have validated examples first. Try writing "Amsterdam" in your Understanding tab: you'll see that wit.ai cannot assign this text to any intent or location entity because you have not trained its model yet :) Validate it with wit/location, and after that it will know.
You can also train (validate or feed) your wit.ai NLP model without the Understanding tab, using a simple curl command and a loop; a sketch follows the link below.
Check this out:
https://wit.ai/docs/http/20160526
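A rough sketch of that idea in Python instead of curl, posting validated samples to the Wit.ai HTTP API; the endpoint and payload follow the older API version linked above (v=20160526) and may differ in current releases, and WIT_TOKEN is a placeholder:

```python
# Sketch: validate training samples via the Wit.ai HTTP API instead of the
# Understanding tab. Endpoint/payload follow the linked 20160526 API version
# and may differ in newer releases; WIT_TOKEN is a placeholder.
import requests

WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"

samples = [
    {
        "text": "I am in Amsterdam",
        "entities": [
            {"entity": "wit/location", "value": "Amsterdam", "start": 8, "end": 17}
        ],
    },
    {
        "text": "Give me 42 of them",
        "entities": [
            {"entity": "wit/number", "value": "42", "start": 8, "end": 10}
        ],
    },
]

resp = requests.post(
    "https://api.wit.ai/samples",
    params={"v": "20160526"},
    headers={"Authorization": f"Bearer {WIT_TOKEN}",
             "Content-Type": "application/json"},
    json=samples,
)
print(resp.status_code, resp.json())
```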
Have a nice day :)

Match with Phrase for Google Speech API

I capture audio from a speaker saying "I want to meet John Disilva". I pass this to the Google Speech API with the phrase hints { 'John Disilva', 'Ashish Mundra' }. However, the Google Speech API returns the full phrase, i.e. 'I want to meet John Disilva'.
Is there a way I can get only my phrase as the return value, since I am only interested in extracting the name part?
The reason is that I cannot control what someone says to my mic. They can say 'I would like to see John Disilva' or 'Do you know John Disilva', but I am sure that my user will always have that name somewhere in the sentence, and that is what I want to extract.
If the Google Speech API could give me just the phrase via which it detected John Disilva in that sentence, I could use that phrase for further processing in my code.
This isn't possible with the Google Speech API. Your best bet may be to just do post-processing to see which name is present. If you need something more accurate than that, look for an ASR system that supports "keyword spotting."
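A minimal post-processing sketch along those lines, scanning the returned transcript for the names you care about (the name list is taken from the question; the matching is just a case-insensitive substring check):

```python
# Scan the transcript returned by the Speech API for known names.
KNOWN_NAMES = ["John Disilva", "Ashish Mundra"]

def find_names(transcript):
    lowered = transcript.lower()
    return [name for name in KNOWN_NAMES if name.lower() in lowered]

print(find_names("I want to meet John Disilva"))   # ['John Disilva']
print(find_names("Do you know Ashish Mundra?"))    # ['Ashish Mundra']
```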

Resources