I have created a simple conversational flow in Dialogflow that accepts various questions and speaks pre-programmed replies, all defined in a series of intents. There are no external webhooks, etc.
When the agent is used on a screen-based device (e.g. a mobile phone), I want to display more text than is spoken (displayText). For example:
User: "What colour is the sky?"
Bot: "Blue" (spoken and displayed on screen). "At night it is black". (Additional information displayed on screen only.)
I want to do the same for each intent.
What is the simplest way of achieving that please? I would prefer to keep most of it in Dialogflow and to write the minimum amount of code possible.
It's OK, I found the solution, thanks. In Dialogflow intents, under Response, there are two tabs, Default and Google Assistant. Under Google Assistant there is an option, "Customise audio output". When you select that, you get two input fields, one for text and one for speech.
So, to use the above example: under the intent's training phrases I entered "What colour is the sky?"
Under the Default response I entered "Blue".
Under the Google Assistant response, in the Text Output field, I entered "Blue. At night it is black."
Under the Google Assistant response, in the Speech Output field, I entered "Blue".
It works perfectly on both Google Home (voice only) and the Assistant on a mobile phone (it speaks "Blue" but displays "Blue. At night it is black.").
It doesn't even seem necessary to enter anything in the Default response; it works fine on Google Home and the Assistant on the phone without it. I'm not sure about other platforms, though.
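For anyone who prefers to do the same thing from a webhook instead of the console, the actions-on-google Node.js client library exposes the same split via SimpleResponse, which takes separate speech and text fields. A minimal sketch, assuming the v2 client library (the intent name 'sky.colour' is illustrative, not from the question):

const { dialogflow, SimpleResponse } = require('actions-on-google');

const app = dialogflow();

app.intent('sky.colour', (conv) => {
  conv.ask(new SimpleResponse({
    speech: 'Blue',                      // spoken on every surface
    text: 'Blue. At night it is black.', // shown only on devices with a screen
  }));
});

// Attach `app` as your webhook handler, e.g. with Firebase Cloud Functions:
// exports.dialogflowFulfillment = functions.https.onRequest(app);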
I am asking my agent for my Wi-Fi password, which is, for example, 9876; Dialogflow then responds with "nine thousand eight hundred seventy-six" rather than "nine eight seven six" (simple digit format). I tried putting spaces between the numbers, which works fine, but how can I achieve this without including spaces between the numbers?
If you want to change the default responses given by Dialogflow, you can have a look at SSML. With SSML you can modify all or part of your bot's response.
In your case you should have a look at the say-as tag to change just the number output.
To get the result that you want for the text "example 9876", your SSML string should look like this:
<speak>
example <say-as interpret-as="verbatim">9876</say-as>
</speak>
This will translate into: "Example nine eight seven six".
If you are using Actions on Google, you can play around with SSML in the simulator's Audio tab and test which SSML tags produce your desired result.
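If you return responses from a webhook rather than the console, the SSML can be sent directly as the reply string. A minimal sketch, assuming the actions-on-google v2 client library (the intent name 'wifi.password' is illustrative):

const { dialogflow } = require('actions-on-google');

const app = dialogflow();

app.intent('wifi.password', (conv) => {
  // Wrapping the reply in <speak> lets <say-as> spell the digits out.
  conv.ask('<speak>example <say-as interpret-as="verbatim">9876</say-as></speak>');
});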
Please excuse me if this is a really basic question - I'm very much still learning, and I just cannot find a solution.
I'm trying to use the standard basic text responses in Dialogflow, which, from what I understand, should work.
What I want to do is have a set statement ("Okay, let's see what I can find"), then a random pick from a list, then another set statement, essentially stacking the responses in Dialogflow (see screenshot).
It works absolutely fine in Dialogflow's test console; however, it doesn't do what I want when I take it into the Actions on Google simulator.
Have I made a stupid error, missed a toggle switch somewhere, or am I trying to do something unsupported?
To surface text responses defined in Dialogflow's Default response tab, go to the Google Assistant response tab and turn on the switch that says "Use response from the DEFAULT tab as the first response".
Hi, I'm facing a problem: I have selected the phone surface and am returning both a simple response and a list card, but the simulator displays both. How do I remove the simple response when displaying the list card?
This requirement is for both the Google Home Mini and the Assistant on the phone.
Here I need to make clear that there is no request from the user by clicking the list card; it is meant for display purposes only.
I don't know whether my way of implementing this is wrong or not, so correct me if I am wrong. But is it possible to remove the simple response, or is there any other way to get rid of it?
Keep in mind that you must have at least one SimpleResponse, in addition to any other RichResponses you may send. This SimpleResponse can contain a blank space - but it must exist. (It should, however, probably include more than a blank space.)
Use the following code to detect whether the current surface has a screen:
const screenAvailable = conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT');
(Note: conv.available.surfaces.capabilities describes what any of the user's devices can do, which is meant for surface transfer; conv.surface.capabilities describes the device handling the current conversation, which is what you want here.)
If a screen is present, add the UI-based response; if not, send only the simple response (see the sketch below).
Test on a real mobile device and a Google Home, as the simulator shows extra information during simulation.
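Putting those points together, here is a minimal sketch, assuming the actions-on-google v2 client library (the intent name, list keys, and titles are illustrative):

const { dialogflow, List, SimpleResponse } = require('actions-on-google');

const app = dialogflow();

app.intent('show.options', (conv) => {
  // At least one SimpleResponse must come first, even alongside rich responses.
  // Its text part can be as short as a single space, though real text is better.
  conv.ask(new SimpleResponse({
    speech: 'Here is the list.',
    text: ' ',
  }));

  // Attach the list only when the device handling this conversation has a screen.
  if (conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT')) {
    conv.ask(new List({
      title: 'Options',
      items: {
        OPTION_ONE: { title: 'Option one' },
        OPTION_TWO: { title: 'Option two' },
      },
    }));
  }
});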
With Dialogflow (API.AI) I find the problem that vessel names are not matched well when the input comes from Google Home.
It seems as though the speech-to-text engine completely ignores them and just does speech-to-text based on its dictionary, so Dialogflow can't match the resulting text at the end.
Is it really like that, or is there some way to improve it?
Thanks and best regards
I'd recommend looking at Dialogflow's training feature to identify where the speech recognition of the Google Assistant may not have worked the way you expect. In those cases, you'll see how Google's speech recognition detected words you may not have accounted for. In cases where you'd like to match these unrecognized words to an entity value, simply add them as synonyms.
I have made a bot using QnA Maker and Node JS which is running on Skype.
When the user inputs a word which has multiple matches in the FAQ link or document uploaded to QnA Maker, it shows choice buttons using the QnAMakerTools module from Node. My question: when the multiple matches have the same initial words, then because of the size of the choice buttons in Skype, half of the text gets hidden. For example, I have three matches like:
Whom should I contact for parking?
Whom should I contact for canteen?
Whom should I contact for Stationery?
It shows in Skype as
Whom should I contact for...
Whom should I contact for...
Whom should I contact for...
If the option text is too long, parts of it get hidden. What can I do about this?
First of all, there is a limitation on the maximum number of characters in Skype, so that's something you will have to live with. However, you can implement some custom logic to change the text being shown.
The current logic that you are seeing is in the QnAMakerTools file.
The way to go here is probably providing your own QnAMakerTools implementation (it needs to follow this interface).
The QnAMakerDialog receives an IQnAMakerOptions parameter. One of the properties of that interface is feedbackLib, which is basically the QnAMakerTools instance that the dialog will later use to disambiguate the question, as you can see here.
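As a rough sketch of that idea, the class below keeps the two methods the library's own QnAMakerTools appears to expose (createLibrary and answerSelector) and shortens the button labels by stripping the longest common prefix of the candidate questions. The shape of the options object (answers[i].questions) and the way the dialog should end are assumptions to verify against your version of botbuilder-cognitiveservices:

const builder = require('botbuilder');

// Longest common prefix of an array of strings.
function longestCommonPrefix(strings) {
  let prefix = strings[0] || '';
  for (const s of strings) {
    while (!s.startsWith(prefix)) {
      prefix = prefix.slice(0, -1);
    }
  }
  return prefix;
}

class ShortLabelQnAMakerTools {
  constructor() {
    this.lib = new builder.Library('shortLabelQnaTools');
    this.lib.dialog('answerSelection', [
      (session, args) => {
        // Assumes args follows IQnAMakerResults: answers[i].questions holds
        // the matched questions for each candidate answer.
        session.dialogData.answers = args.answers;
        const questions = args.answers.map((a) => a.questions[0]);
        const prefixLength = longestCommonPrefix(questions).length;
        // Show only the distinctive tail of each question on the buttons,
        // e.g. "parking?", "canteen?", "Stationery?" in the example above.
        const labels = questions.map((q) => q.slice(prefixLength) || q);
        session.dialogData.labels = labels;
        builder.Prompts.choice(session, 'Did you mean:', labels, {
          listStyle: builder.ListStyle.button,
        });
      },
      (session, results) => {
        const index = session.dialogData.labels.indexOf(results.response.entity);
        const selected = session.dialogData.answers[index];
        if (selected) {
          session.send(selected.answer);
        }
        session.endDialog();
      },
    ]);
  }

  createLibrary() {
    return this.lib;
  }

  answerSelector(session, options) {
    session.beginDialog('shortLabelQnaTools:answerSelection', options || {});
  }
}

Register the library on your bot and pass the instance as feedbackLib in the QnAMakerDialog options, e.g. bot.library(tools.createLibrary()) where tools = new ShortLabelQnAMakerTools().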