Is it possible to use Google's WaveNet Text-to-Speech model for the Actions on Google integration of a Dialogflow agent? - dialogflow-es

Google Cloud's Text-to-Speech API has a WaveNet model whose output, in my opinion, sounds far better than the standard speech. This model can be used in Dialogflow agents (Settings > Speech > Text To Speech), which results in the generated speech being included in the DetectIntentResponse. However, I can find no way to use this speech with the Actions on Google integration, i.e. in an actual Google Assistant app. Have I overlooked this, or is this really not possible, and if so, does anyone know when they plan to enable it?

In the Actions console, going to the Invocation page lets you select a TTS voice.
All of the voices can be demoed on the Languages & Locales page of the docs, and the vast majority of them are WaveNet voices.
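For reference, this is roughly how the Dialogflow-side WaveNet synthesis the question describes can be requested over the API. A minimal sketch, assuming the @google-cloud/dialogflow Node.js client; the project/session IDs are placeholders and en-US-Wavenet-D is just one example voice name:

```typescript
import { SessionsClient } from '@google-cloud/dialogflow';

// Hypothetical IDs for illustration; en-US-Wavenet-D is one example voice.
async function detectWithWavenet(projectId: string, sessionId: string, text: string) {
  const client = new SessionsClient();
  const session = client.projectAgentSessionPath(projectId, sessionId);

  const [response] = await client.detectIntent({
    session,
    queryInput: {
      text: { text, languageCode: 'en-US' },
    },
    // Ask Dialogflow to synthesize the reply with a WaveNet voice.
    outputAudioConfig: {
      audioEncoding: 'OUTPUT_AUDIO_ENCODING_MP3',
      synthesizeSpeechConfig: {
        voice: { name: 'en-US-Wavenet-D' },
      },
    },
  });

  // MP3 bytes of the WaveNet-rendered speech.
  return response.outputAudio;
}
```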

Related

Triggering DialogFlow with Face detection

Does anyone know if there is a way to trigger Dialogflow from the Face Detection API?
The Dialogflow conversation process is not very user-friendly, since you need to say:
"Ok Google, Talk to my app"
I've seen something about implicit invocations and deep links here, which provides a better approach:
https://blog.mirabeau.nl/nl/articles/creating_friendly_conversational_flows_using_google_deep_links/61fNoQEwS7WdUqRTMdo6J2
I'm trying to do something like this:
https://www.forbes.com/sites/katiebaron/2018/06/07/ambient-tech-that-actually-works-hm-launches-a-voice-activated-mirror/#49b619634463
but with Google Assistant / Dialogflow / the Vision API (face detection).
Does anyone have ideas on how to do this with Google?
I am afraid that using face detection to trigger Google Assistant is not possible. Google requires you to use a trigger phrase such as "Ok Google, Talk to my app" when you build actions. This is done to protect the user's privacy and ensures the app cannot be triggered without the user talking to the device.
Implicit invocations and deep links are shortcuts in your conversations, but they can only be used if you trigger the assistant first by saying "Okay Google..." Thanks for reading my blog by the way :)

Dialogflow, Google Actions and a web demo

Can you have an embedded Google Action, like in the Test Simulator, or use a live-chat-style bot on a website with Google Actions responses? Are there any codelabs or third-party platforms that do this?
Yes. Some of the third-party chat windows I know of:
- Smooch
- Kommunicate
You will have to use the Payload response to send specific components (such as cards, quick replies or images); a sketch of such a response follows below.
There's also this GitHub repository, which allows you to set Google Actions replies and displays them in the chat:
https://github.com/mishushakov/dialogflow-web-v2
Or you can write your own in React or Vue.
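A minimal sketch of what such a Payload response could look like in a Dialogflow ES webhook reply, assuming the Actions on Google custom-payload format that widgets like dialogflow-web-v2 read; the card title and image URL are placeholders:

```typescript
// A sketch of a Dialogflow ES webhook reply carrying an Actions on Google
// custom payload; a chat widget can read `payload.google` and render the
// card itself. Card title and image URL are placeholders.
const webhookReply = {
  fulfillmentText: 'Here is a card.', // fallback for plain-text surfaces
  payload: {
    google: {
      expectUserResponse: true,
      richResponse: {
        items: [
          // The first item of a richResponse must be a simple response.
          { simpleResponse: { textToSpeech: 'Here is a card.' } },
          {
            basicCard: {
              title: 'Example card',
              image: {
                url: 'https://example.com/card.png',
                accessibilityText: 'An example card image',
              },
            },
          },
        ],
      },
    },
  },
};

export default webhookReply;
```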
Actions on Google is the platform for Google Assistant developers and has its own library and components. You can use these features and components only in Google Assistant projects, since every platform has different features and capabilities. If you need to create a chatbot with this kind of feature, you should check that platform's docs, e.g. Facebook, Telegram...
If you want to create a chatbot with rich responses, Dialogflow has its own response types such as Card and Suggestion. So you can build your agent and integrate Dialogflow directly (not a Google Action).
You can check here for each platform's and Dialogflow's response and payload capabilities.
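As an illustration of those platform-agnostic rich responses, here is a sketch of a fulfillment webhook using the dialogflow-fulfillment library; the intent name, card contents and port are assumptions:

```typescript
import express from 'express';
import { WebhookClient, Card, Suggestion } from 'dialogflow-fulfillment';

const app = express();
app.use(express.json());

app.post('/webhook', (request, response) => {
  const agent = new WebhookClient({ request, response });

  const intentMap = new Map<string, (agent: WebhookClient) => void>();
  intentMap.set('Default Welcome Intent', (agent) => {
    agent.add('Welcome!');
    // Platform-agnostic rich responses: Dialogflow maps these onto each
    // integration's native card / quick-reply format where supported.
    agent.add(new Card({
      title: 'Example card',
      imageUrl: 'https://example.com/card.png', // hypothetical image URL
      text: 'A card rendered by the integration',
    }));
    agent.add(new Suggestion('Tell me more'));
  });

  agent.handleRequest(intentMap);
});

app.listen(3000);
```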

How can I create a music player for my Google Assistant?

I'm wondering how I can create a music player for my Google Assistant compatible devices (e.g. Google Home Mini, my tablet, phone...). I've been researching how to do this, but I've only found things like using Dialogflow, Node.js and/or Actions on Google with Google Firebase Cloud Functions. I'm new to all this; I was motivated by Spotify and Pandora and all those other services, so I also tried looking up how they do it, but I found nothing. If any of you know how to do it, please help me.
In addition to all that, I'm just a tad confused about the whole Dialogflow and Actions on Google integration, but that's easier to fix than the overall question.
If this isn't "solvable", is there a way to do it with Dialogflow fulfillment?
In order to create something like Spotify or Pandora, you need to partner with Google to create a media action. These are different than the conversational actions that you can create using Actions on Google and Dialogflow.
If you want to create a conversational action with Actions on Google and Dialogflow that produces long-form audio results as part of the conversation, you will want to look into the Media response, which you can include in your replies.
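A minimal sketch of such a Media response, assuming the actions-on-google Node.js client library; the intent name and the audio/icon URLs are placeholders:

```typescript
import { dialogflow, MediaObject, Image, Suggestions } from 'actions-on-google';

const app = dialogflow();

// Hypothetical intent name and URLs, for illustration only.
app.intent('play.sample.track', (conv) => {
  conv.ask('Playing a sample track.');
  conv.ask(new MediaObject({
    name: 'Sample Track',
    url: 'https://example.com/audio/sample.mp3',
    description: 'A demo long-form audio clip',
    icon: new Image({
      url: 'https://example.com/icon.png',
      alt: 'Track icon',
    }),
  }));
  // On screen devices, a media response that keeps the mic open must be
  // accompanied by suggestion chips.
  conv.ask(new Suggestions('Stop'));
});

// Mount `app` on an Express server or a Firebase Cloud Function.
export { app };
```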

Prebuilt intents and training phrases for Google Assistant Actions in German or Spanish

I'm working on a music player as a Google Assistant Action. Are there prebuilt agents with intents and training phrases available for languages other than English?
It seems possible to upload a JSON file with intents.
Are there resources available for Spanish and/or German intents?
I would suggest you take the prebuilt intents, fetch the training phrases from each intent, translate them into the desired language, and then compile intents of your own.
This process requires interacting with Dialogflow using its REST APIs.
This reference page will help you understand the different APIs required.
Also, as you said, it is possible to upload intents as JSON, so you could convert the translated intents into a JSON file and upload them manually.
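A sketch of the first step, fetching each intent's training phrases so they can be translated, assuming the @google-cloud/dialogflow Node.js client; the project ID is a placeholder:

```typescript
import { IntentsClient } from '@google-cloud/dialogflow';

// Hypothetical project ID; INTENT_VIEW_FULL is required so the response
// includes each intent's training phrases.
async function exportTrainingPhrases(projectId: string) {
  const client = new IntentsClient();
  const [intents] = await client.listIntents({
    parent: client.projectAgentPath(projectId),
    intentView: 'INTENT_VIEW_FULL',
  });

  for (const intent of intents) {
    const phrases = (intent.trainingPhrases ?? []).map((tp) =>
      (tp.parts ?? []).map((part) => part.text).join('')
    );
    // Translate `phrases`, then build and upload new intents from them.
    console.log(intent.displayName, phrases);
  }
}
```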

Possible to return an image in the Google Actions webhook response?

From the docs it seems like SpeechResponse is the only documented type of response you can return:
https://developers.google.com/actions/reference/conversation#SpeechResponse
Is it possible to load an image or some other type of media into the Assistant conversation via API.AI or the Actions SDK? It seems like this is supported in api.ai for FB and other messengers:
https://docs.api.ai/docs/rich-messages#image
Thanks!
As of today, the Google Actions SDK supports Conversation Actions, building a voice UI that is integrated with Google Home.
The API.AI integration with Google Actions can be checked out here; it currently shows no support for images in the response.
When they provide an integration with Google Allo, they might start supporting images, videos, etc. in the messaging interface.
That feature seems to be present now. You can look it up in the docs at https://developers.google.com/actions/assistant/responses
Note: Images are supported only on devices with visual output, so Google Home would obviously not be able to display them, but devices with a screen do support a card with an image.
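A minimal sketch of returning such a card, assuming the actions-on-google Node.js client library; the intent name and image URL are placeholders:

```typescript
import { dialogflow, BasicCard, Image } from 'actions-on-google';

const app = dialogflow();

// Hypothetical intent name and image URL, for illustration only.
app.intent('show.image', (conv) => {
  conv.ask('Here is the picture you asked for.');
  // conv.screen is true only on surfaces with visual output, so the
  // card is skipped on voice-only devices such as Google Home.
  if (conv.screen) {
    conv.ask(new BasicCard({
      title: 'Example image',
      image: new Image({
        url: 'https://example.com/picture.png',
        alt: 'An example picture',
      }),
    }));
  }
});

export { app };
```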
Pro tip: Yes, you can.
What you want to do is represent your image/video as a URL within API.AI and render that URL as an image/video within your app.
See working example.
