SSML Actions on Google, change speaking language - dialogflow-es

I can't seem to change the language during the conversation.
I tried:
<speak>
<voice gender="male" variant="3" languageCode="fr">
<prosody rate="105%">
Bonjour
</prosody>
</voice>
</speak>
Any ideas how to do that?

The <voice> tag is not officially supported in SSML for the Google Assistant, although it does appear to partially work.
The gender and variant attributes do appear to be honored, but the SSML spec does not define a languageCode attribute (the one you use in your question), and the spec's languages attribute does not appear to be supported.
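If you only need the parts that do seem to work, a minimal fulfillment sketch might look like the following. This is an assumption-heavy example: the voice selection remains unofficial, the conv object is the same response object used in the other answers on this page, and the unsupported language attributes are simply left out.
// Sketch only: keep the gender/variant attributes that reportedly work
// and drop languageCode/languages, which the Assistant does not support.
const ssml = '<speak>' +
  '<voice gender="male" variant="3">' +
  '<prosody rate="105%">Bonjour</prosody>' +
  '</voice>' +
  '</speak>';
conv.add(ssml);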

Related

How to shout, scream, cry out, or yell in an Alexa Skill?

I would like to create a skill that yells at someone, but I cannot find any reference in SSML to yelling or screaming.
Is it even possible?
You can use an audio file for that. Record one or download one from the internet and reference it with the SSML audio element; just put your audio URL in the src attribute, as in the code below.
<speak>
<audio src="soundbank://soundlibrary/transportation/amzn_sfx_car_accelerate_01"/>
</speak>
There's currently no yelling supported. The closest expression you can achieve with SSML is Amazon's custom emotion tag:
<amazon:emotion name="excited" intensity="medium">Hey, I'm so excited!</amazon:emotion>
Support for emotions varies across locales, so I suggest keeping an eye on the Alexa dev blog posts to track new possibilities:
https://developer.amazon.com/en-US/blogs/alexa/alexa-skills-kit/2020/11/alexa-speaking-styles-emotions-now-available-additional-languages
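For context, here is a minimal sketch of how that emotion markup might be returned from an ASK SDK v2 (Node.js) handler. The handler name and reply text are illustrative, not part of the original answer.
// Hypothetical intent handler that wraps the reply in the emotion tag shown above.
const ExcitedReplyHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest';
  },
  handle(handlerInput) {
    const ssml = '<amazon:emotion name="excited" intensity="medium">' +
      "Hey, I'm so excited!</amazon:emotion>";
    return handlerInput.responseBuilder.speak(ssml).getResponse();
  }
};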

Text To Speech configuration for actions-on-google

I have an application developed using Dialogflow and actions-on-google framework.
When I provide a response that has numbers in it, the text-to-speech engine pronounces 0 (zero) as "O" (oh).
Is there any way to configure it so that 0 is always spoken as "zero" instead of "oh"?
Please help.
You can look at the SSML documentation to provide more specific nuances in the text-to-speech response.
If you want specific characters read out individually, you should be able to use an SSML say-as tag:
<speak>
<say-as interpret-as="characters">1234567890</say-as>
</speak>
Using the sub element's alias attribute fixed my issue:
<speak>This is a test <sub alias="one one zero seven">1107</sub></speak>
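If the number comes from fulfillment code rather than a static response, the same say-as approach can be applied there. A sketch, with the variable name and surrounding sentence invented for illustration:
// Sketch: read the digits out one by one instead of letting TTS guess.
const accountNumber = '1107'; // illustrative value
const ssml = `<speak>Your number is <say-as interpret-as="characters">${accountNumber}</say-as>.</speak>`;
conv.add(ssml); // same conv response object as in the other answers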

Update pronunciation of words in my Google Action

I have intents with responses done in Dialogflow with fulfillment enabled, and I have integrated with Google Assistant. There is a specific word "FICO" (as in FICO score) where the pronunciation is wrong when the Assistant responds. Is there a way to change the pronunciation of that specific word?
Instead of sending back text that will be used in text-to-speech generation, you can use the SSML <sub> tag to provide an aliased pronunciation for the word in question. So you might try something like this to see how it sounds:
<speak>
Your <sub alias="fyeco">FICO</sub> score is
</speak>
or fiddle with it till it sounds the way you want. The part inside the tag will be displayed, while the alias part will be spoken.
The code for this might be something like
const msg = `<speak>Your <sub alias="fyeco">FICO</sub> score is ${score}.</speak>`;
conv.add(msg);

Multi-language support in API.AI

I have an existing Dialogflow agent, but now want to add a new language. I have figured out how to add new intents for other languages in the GUI, but it's not clear how the fulfillment logic needs to change for each language.
How do I use the locale information to respond for each language intent?
When you add a language, there will be another pill shape under your bot's name. Mine started in English (en), and I added Spanish (es). First, I add my training phrases and responses in English; then I click the (es) pill and add the Spanish examples and responses there.
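On the fulfillment side, the webhook request tells you which language matched, so you can branch on it. A minimal Express-style sketch, assuming a Dialogflow ES webhook and reading the queryResult.languageCode field of the request body; the reply strings are placeholders:
// Sketch of a webhook that answers in whichever language Dialogflow matched.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const lang = req.body.queryResult.languageCode; // e.g. 'en' or 'es'
  const reply = lang.startsWith('es')
    ? 'Hola, ¿en qué puedo ayudarte?'
    : 'Hello, how can I help?';
  res.json({ fulfillmentText: reply });
});

app.listen(3000);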

Google Home -> Dialogflow entity matching very bad for non-dictionary entities?

With Dialogflow (API.AI), I find that vessel names are not matched well when the input comes from Google Home.
It seems as if the speech-to-text engine completely ignores them and transcribes based only on dictionary words, so Dialogflow can't match the resulting text at all.
Is it really like that, or is there some way to improve it?
Thanks and best regards
I'd recommend looking at Dialogflow's training feature to identify where the Google Assistant's speech recognition may not have worked the way you expect. There you'll see how Google's speech recognition produced words you may not have accounted for. Where you'd like to match these unrecognized words to an entity value, simply add them as synonyms, as sketched below.
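For illustration only, an entity entry with misheard variants added as synonyms might look like this in the agent's exported entity-entries file (the vessel name and its variants are invented):
[
  {
    "value": "Evergreen",
    "synonyms": ["Evergreen", "ever green", "every green"]
  }
]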
