I am trying to broaden my voice-technology experience after working with Amazon Alexa.
Is it possible to have my Raspberry Pi-based assistant handle custom built-in intents like:
- Hey Google, play with the dog;
- Hey Google, make me a coffee;
- Hey Google, clean the room;
As far as I know, Google has much the same kind of deep linking for commands, and normally the user needs to say something like:
- Hey Google, talk to my concierge play with the dog;
- Hey Google, talk to my concierge make me a coffee;
- Hey Google, talk to my concierge clean the room;
Is there a way to make direct calls to a specific action without saying its name, using the Actions SDK?
Yes. If you are using the Google Assistant SDK, you can register custom device actions, which let you receive callbacks for specific queries you say, including the ability to pull out parameters that may have been spoken.
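For concreteness, here is a minimal Python sketch following the pattern of the pushtotalk.py sample that ships with the Google Assistant SDK samples. The command name com.example.commands.MakeCoffee and the strength parameter are hypothetical; they have to match the deviceExecution command and parameters you declare in the custom action package you deploy with the gactions CLI.

```python
# Sketch only, modeled on the Google Assistant SDK's pushtotalk.py sample.
# Assumes `pip install google-assistant-sdk[samples]`.
from googlesamples.assistant.grpc import device_helpers

# Routes deviceExecution commands found in Assistant responses to handlers.
device_handler = device_helpers.DeviceRequestHandler('my-device-id')

# Hypothetical command name; it must match the command declared in your
# custom action package.
@device_handler.command('com.example.commands.MakeCoffee')
def make_coffee(strength):
    # 'strength' is filled from the spoken query, as captured by the query
    # patterns you define (e.g. "make me a ($Strength:strength)? coffee").
    print('Brewing %s coffee' % strength)
```

With something like this deployed, "Hey Google, make me a coffee" can be routed straight to your handler without invoking a conversational action by name.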
I just want to know if I can somehow integrate Alexa or Google Assistant into a website I made. I don't want to build completely new skills or apps. I just want to know whether, if I type something, I can get a reply from Google Assistant or Alexa and show it on the website.
I think Google has a Google Assistant SDK, but it's written in Python, and even the Node.js one depends on a Python environment.
So is there any chance I can do this?
No.
The only way to initiate a conversation with the smart speaker is by voice. There is no server-side activation, sorry.
Does anyone know if there is a way to trigger Dialogflow from the Face Detection API?
The Dialogflow conversation process is not very user-friendly, since you need to say:
"Ok Google, Talk to my app"
I've seen something about implicit invocations and deep links here:
https://blog.mirabeau.nl/nl/articles/creating_friendly_conversational_flows_using_google_deep_links/61fNoQEwS7WdUqRTMdo6J2
that provides a better approach
I'm trying to do something like this
https://www.forbes.com/sites/katiebaron/2018/06/07/ambient-tech-that-actually-works-hm-launches-a-voice-activated-mirror/#49b619634463
But with Google Assistant / Dialogflow / Vision API (Face detection)
Does anyone have ideas on how to do this with Google?
I am afraid that using face detection to trigger the Google Assistant is not possible. Google requires you to use a trigger phrase such as "Ok Google, talk to my app" when you build actions. This is done for user privacy and makes sure that the app cannot be triggered without the user talking to the device.
Implicit invocations and deep links are shortcuts in your conversations, but they can only be used if you trigger the Assistant first by saying "Okay Google...". Thanks for reading my blog, by the way :)
It seems Google Assistant is unable to handle certain trigger phrases in the intent. The ones I have come across are the following:
Send message to scott
chat with q
Send text to felix
It seems to work fine in the Dialogflow simulator. However, it doesn't work at all in the Actions Console simulator or on a real device like a Google Home Mini. The Actions Console simulator responds with "You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices", and a real device gives an error like "I am sorry, I cannot help you ..", exits completely, and leaves the device in a funky state. It doesn't seem to trigger any fallback intent. I have tried adding an input context, but it makes no difference.
It's very easy to reproduce: just create a demo action with an intent for the above phrases along with "communicate with penny", invoke your app, and then try the above phrases after the welcome message. It will only work if you say "communicate with ..".
Is this a known issue/limitation? Is there a list of phrases that we cannot use to trigger an intent?
The Actions on Google libraries (Node.js, Java) include a limited feature set that allows third-party developers to build actions for the Google Assistant.
Standard features available in the Google Assistant (like "text Mary 'Hello, world!'") won't be available in your action until you build that feature, using fulfillment.
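As a rough illustration, a fulfillment webhook that implements a messaging-style intent might be sketched in Python like this. The intent name send.message and the recipient parameter are hypothetical and would have to match what you define in your Dialogflow agent.

```python
# Minimal Dialogflow v2 fulfillment webhook sketch using Flask.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    req = request.get_json()
    # Dialogflow v2 puts the matched intent and parameters under queryResult.
    intent = req['queryResult']['intent']['displayName']
    params = req['queryResult']['parameters']
    if intent == 'send.message':  # hypothetical intent name
        reply = 'Sending your message to %s.' % params.get('recipient', 'them')
    else:
        reply = "Sorry, I can't do that yet."
    # Dialogflow speaks/displays whatever you return in fulfillmentText.
    return jsonify({'fulfillmentText': reply})
```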
Rather than looking for a list of phrases you can't use, review the documentation for invocation to see what you can use. Third-party actions for the Google Assistant are invoked like:
- Hey Google, talk to <your invocation name>;
- Hey Google, ask <your invocation name> to <do some task>;
To learn how to get started building for the Google Assistant, check out Google's codelabs at https://codelabs.developers.google.com/codelabs/actions-1/#0
If you've already reviewed Google's Actions on Google Codelabs through level three, consider updating your question to include a list of your intents and a code sample of your fulfillment, so other Stack Overflow users can understand how your project behaves.
See also: "What topics can I ask about here?" and "How do I ask a good question?"
I'm wondering how I can create a music player for my Google Assistant-compatible devices (e.g. Google Home Mini, my tablet, my phone...). I've been researching how to do this, but I've only found things like using Dialogflow, Node.js, and/or Actions on Google with Google Firebase Cloud Functions. I'm new to all this; I was motivated by Spotify, Pandora, and all those other services, so I also tried looking up how they do it, but I found nothing. If any of you know how to do it, please help me.
In addition to all that, I'm a bit confused about the whole Dialogflow and Actions on Google integration, but that's easier to fix than the overall question.
If this isn't "solvable", is there a way to do it with Dialogflow fulfillments?
In order to create something like Spotify or Pandora, you need to partner with Google to create a media action. These are different from the conversational actions that you can create using Actions on Google and Dialogflow.
If you want to create a conversational action with Actions on Google and Dialogflow that produces long-form audio as part of the conversation, you will want to look into the Media response, which you can include in your replies.
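As a sketch of what that can look like at the webhook level, a Dialogflow fulfillment response carrying an Actions on Google Media response might be shaped like this; the track name and audio URL are placeholders.

```python
# Sketch: a Dialogflow webhook reply including an Actions on Google
# Media response for long-form audio. Names and URLs are placeholders.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    return jsonify({
        'payload': {
            'google': {
                'expectUserResponse': True,
                'richResponse': {
                    'items': [
                        # A rich response needs a simple response first.
                        {'simpleResponse': {'textToSpeech': 'Here is a track.'}},
                        {'mediaResponse': {
                            'mediaType': 'AUDIO',
                            'mediaObjects': [{
                                'name': 'Example track',
                                'contentUrl': 'https://example.com/track.mp3',
                            }],
                        }},
                    ],
                    # Suggestion chips are required alongside a Media
                    # response while the conversation stays open.
                    'suggestions': [{'title': 'Stop'}],
                },
            }
        }
    })
```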
I'm new to chatbot programming. I would like to do exactly what the command "OK Google, take a picture" does: Android opens the camera and takes the picture after 3 seconds. Dialogflow is a service from Google, so I'm thinking there might be a library with an example of this, or if not, how should I search for a way to put this command in my action?
PS: I'm making a location-and-opinion action that receives from the user a place and an opinion about that place, so I would like to ask whether the user wants to take a picture of the place using this, but I don't know how to search for this!
Unfortunately, there is currently no library and no direct support for this type of thing. The Assistant does not give Action developers access to the camera. In fact, most of the work your Action does is on a cloud-based server, not on the device itself.
You can, in some cases, use something like the Android Link helper, but this requires the user to have installed your app on their phone, and doesn't quite do what it sounds like you want.