With Dialogflow (API.AI) I have the problem that vessel names are not matched well when the input comes from Google Home.
It seems as if the speech-to-text engine completely ignores them and just transcribes based on its dictionary, so Dialogflow can't match the resulting text at the end.
Is it really like that or is there some way to improve?
Thanks and
Best regards
I'd recommend looking at Dialogflow's training feature to identify where the speech recognition of the Google Assistant may not have worked the way you expect. In those cases, you'll see how Google's speech recognition detected words you may not have accounted for. In cases where you'd like to match these unrecognized words to an entity value, simply add them as synonyms.
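If you'd rather add those synonyms programmatically than one by one in the console, a minimal sketch with the Dialogflow ES Python client could look like this (the project ID, entity type ID, and vessel names are placeholders):

    # Sketch: add misrecognized transcriptions as synonyms of an existing
    # entity value via the Dialogflow ES API (google-cloud-dialogflow).
    from google.cloud import dialogflow

    client = dialogflow.EntityTypesClient()
    parent = client.entity_type_path("my-project-id", "my-vessel-entity-type-id")

    # "MS Nordkapp" is the canonical value; the synonyms are the spellings
    # the training tool showed Google's speech recognition actually produced.
    entity = dialogflow.EntityType.Entity(
        value="MS Nordkapp",
        synonyms=["MS Nordkapp", "miss north cap", "ms nord cap"],
    )
    client.batch_create_entities(parent=parent, entities=[entity]).result()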
Please tell me how I can change the stress in some words in the Azure text-to-speech voice engine. I use Russian voices and am not working through SSML.
When I send text for processing, in some words it puts the stress on the wrong syllable or letter.
I know that some voice engines use special characters like + or ' in front of a stressed vowel. I have not found such an option here.
To specify the stress for individual words, you can use the SpeakSsmlAsync method and pass a lexicon URL, or you can specify it directly in the SSML by using the phoneme element. In both cases you can use IPA.
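For example, a minimal sketch with the Speech SDK for Python; the voice name is just one of the available Russian voices, and замок makes a handy test word because ˈzamək means "castle" while zɐˈmok means "lock":

    # Sketch: pin the stress of a single Russian word with an inline
    # <phoneme> element and IPA via the Azure Speech SDK.
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

    # The ph attribute fixes the pronunciation, and with it the stress.
    ssml = """
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="ru-RU">
      <voice name="ru-RU-SvetlanaNeural">
        Это <phoneme alphabet="ipa" ph="zɐˈmok">замок</phoneme>.
      </voice>
    </speak>
    """
    result = synthesizer.speak_ssml_async(ssml).get()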
I am using Azure Search in my bot application.
If the input contains a spelling mistake in a single word, like trvel => travel, we get a proper response.
But if I enter "travelexpense", I don't get any result.
Currently I am passing the input to a fuzzy search.
I suggested using the Bing Spell Check API, but it was not approved because of concerns that our input may be stored externally.
Is there any option available in Azure Search to correct words like "travelexpense"? Is there any other option for this scenario?
The closest option I would suggest is a phonetic analyzer:
https://learn.microsoft.com/en-us/azure/search/index-add-custom-analyzers
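For example, a sketch of an index definition with a Metaphone custom analyzer, using the azure-search-documents Python SDK (endpoint, key, index and field names are placeholders; the model class names assume SDK version 11):

    # Sketch: a custom phonetic analyzer so near-miss spellings tokenize
    # to the same phonetic codes.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents.indexes import SearchIndexClient
    from azure.search.documents.indexes.models import (
        CustomAnalyzer,
        PhoneticTokenFilter,
        SearchableField,
        SearchIndex,
        SimpleField,
    )

    index = SearchIndex(
        name="expenses",
        fields=[
            SimpleField(name="id", type="Edm.String", key=True),
            # This field is analyzed with the phonetic analyzer below.
            SearchableField(name="title", type="Edm.String",
                            analyzer_name="phonetic_analyzer"),
        ],
        analyzers=[
            CustomAnalyzer(
                name="phonetic_analyzer",
                tokenizer_name="standard_v2",
                token_filters=["lowercase", "my_phonetic"],
            )
        ],
        token_filters=[PhoneticTokenFilter(name="my_phonetic", encoder="metaphone")],
    )

    client = SearchIndexClient("https://<service>.search.windows.net",
                               AzureKeyCredential("<admin-key>"))
    client.create_index(index)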
There are a couple of other things you can try:
Enable Auto Complete and Suggestions (https://learn.microsoft.com/en-us/azure/search/search-autocomplete-tutorial)
Create synonyms (https://learn.microsoft.com/en-us/azure/search/search-synonyms)
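For the specific "travelexpense" case, a synonym map can at least map known concatenations onto the real terms. A sketch with placeholder names:

    # Sketch: map the concatenated form onto the real term with a synonym
    # map (Solr rule format), then attach it to the searchable field(s).
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents.indexes import SearchIndexClient
    from azure.search.documents.indexes.models import SynonymMap

    client = SearchIndexClient("https://<service>.search.windows.net",
                               AzureKeyCredential("<admin-key>"))
    client.create_synonym_map(
        SynonymMap(name="expense-synonyms",
                   synonyms=["travelexpense, travel expense"])
    )
    # Then set synonym_map_names=["expense-synonyms"] on the field
    # definition so queries expand through the map.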
I'm trying to make a simple bot with Dialogflow to remind me to update my calendar with what I did during the day.
I want it to go something like this:
Bot: Hey, what did you do from 2pm-5pm today?
User: I did jogging from 2pm-3pm
Bot: Added "Jogging" to your calendar from 2pm-3pm. What about from 3pm-5pm?
User: I did reading.
Bot: Added "reading" from 3pm-5pm to your calendar.
My question is, how do I extract the activity (such as jogging or reading), as it can be literally anything? I guess I need to identify the "I did" part and see what comes after it and before the "from 2pm-3pm" part. I have an idea how to do this with Python, but I'm wondering if it's possible using Dialogflow?
Any help is greatly appreciated, thank you
You would use the @sys.any entity type and assign it to that part of the training phrases that you're setting up in Dialogflow.
As you're setting up the training phrases, keep in mind that there may be many ways to say the same sort of thing, which is why using Dialogflow's training phrases is better than trying to capture parameters with string parsing.
So perhaps you want a training phrase like "I did jogging from 2pm to 3pm", with "jogging" annotated as a @sys.any parameter (say, activity) and the times annotated as a @sys.time-period parameter.
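On the fulfillment side, a minimal webhook sketch (Flask; the parameter names activity and time-period are ones I chose for this example) that reads those values:

    # Sketch: Dialogflow ES fulfillment webhook reading the annotated
    # parameters. "activity" and "time-period" are the parameter names
    # assumed above, not anything Dialogflow mandates.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def webhook():
        body = request.get_json()
        params = body["queryResult"]["parameters"]
        activity = params.get("activity", "")   # filled by the @sys.any slot
        period = params.get("time-period", {})  # dict with startTime/endTime
        # ... add the event to the calendar here ...
        return jsonify({"fulfillmentText": f'Added "{activity}" to your calendar.'})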
The bot which I have created within Dialogflow is using a webhook to link to our external site.
One of the intents we have for the bot is to search for knowledge within the site. Originally, the Request Knowledge intent contained a phrase which was a @sys.any parameter, which would then be the search term.
However, because the whole phrase was a @sys.any parameter, this intent would be prioritised over most other intents.
We are trying to get users to use natural language with the bot; however, people still just type in one word or a phrase for the search function.
What I would like, if possible, is to have a fallback intent which acts as the search function. So if the bot cannot successfully match the one word, it would then run a search for this word.
I am not sure if this would fix this problem or just produce more issues.
If anyone has solved something similar to this, I would greatly appreciate the help. Sorry if this is simple to do; I am new to the whole Dialogflow world!
You can turn fulfillment on for Fallback Intents, and these will be sent to your webhook. The JSON includes the full text of what was entered.
However... the results will clearly be less useful, since some of what you receive will be conversational text that simply didn't get picked up by one of the other Intents.
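A minimal sketch of such a fallback handler (Flask; search_site is a stand-in for your own site's search API):

    # Sketch: fulfillment for the Fallback Intent. queryText carries the
    # user's full, unmatched input; search_site() is a placeholder.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def search_site(query: str) -> str:
        raise NotImplementedError  # call your knowledge search here

    @app.route("/webhook", methods=["POST"])
    def webhook():
        body = request.get_json()
        if body["queryResult"]["intent"]["displayName"] == "Default Fallback Intent":
            query = body["queryResult"]["queryText"]  # full original text
            return jsonify({"fulfillmentText": search_site(query)})
        return jsonify({})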
I capture audio from a speaker saying "I want to meet John Disilva". I pass this to the Google Speech API with the phrase hints { 'John Disilva', 'Ashish Mundra' }. However, the Google Speech API returns the full sentence, i.e. 'I want to meet John Disilva'.
Is there a way to get only my phrase as the return value, as I am only interested in extracting the name part?
The reason is that I cannot control what someone says into my mic. They can say 'I would like to see John Disilva' or 'Do you know John Disilva', but I am sure that my user will always have that name somewhere in the sentence, and that is what I want to extract.
If the Google Speech API can tell me the exact phrase via which it detected John Disilva in that sentence, then I can use that phrase for further processing in my code.
This isn't possible with the Google Speech API. Your best bet may be to just do post-processing to see which name is present. If you need something more accurate than that, look for an ASR system that supports "keyword spotting".
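For example, a post-processing sketch with the google-cloud-speech client (the audio file and name list are placeholders): the speech context biases recognition toward the names, and a simple scan afterwards picks out which one occurred.

    # Sketch: bias recognition toward known names, then post-process the
    # transcript to find which name is present.
    from google.cloud import speech

    NAMES = ["John Disilva", "Ashish Mundra"]

    client = speech.SpeechClient()
    with open("utterance.wav", "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        speech_contexts=[speech.SpeechContext(phrases=NAMES)],  # phrase hints
    )

    response = client.recognize(config=config, audio=audio)
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)

    # The API returns the whole sentence, so pick the name out ourselves.
    found = [n for n in NAMES if n.lower() in transcript.lower()]
    print(transcript, found)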