Does anyone know if there is a way to trigger DialogFlow from Face Detection API?
The Dialogflow conversation process is not very user-friendly, since you need to say:
"Ok Google, Talk to my app"
I've seen something about implicit invocations and deep links here:
https://blog.mirabeau.nl/nl/articles/creating_friendly_conversational_flows_using_google_deep_links/61fNoQEwS7WdUqRTMdo6J2
which describes a better approach.
I'm trying to do something like this:
https://www.forbes.com/sites/katiebaron/2018/06/07/ambient-tech-that-actually-works-hm-launches-a-voice-activated-mirror/#49b619634463
But with Google Assistant / Dialogflow / Vision API (Face detection)
Does anyone have ideas on how to do this with Google?
I am afraid that using face detection to trigger the Google Assistant is not possible. Google requires you to use a trigger phrase such as "Ok Google, talk to my app" when you build actions. This is done to protect the user's privacy, and it makes sure that the app cannot be triggered without the user deliberately talking to the device.
Implicit invocations and deep links are shortcuts in your conversations, but they can only be used if you trigger the assistant first by saying "Okay Google..." Thanks for reading my blog by the way :)
I have a technical question about the Google Assistant - unfortunately, I couldn't find a clear answer anywhere.
At the moment our company has:
1. a conversational chatbot built on Dialogflow, which our employees are constantly developing;
2. a Google Actions agent. Our developers managed to connect the Google account to our client's account on our platform using OAuth 2.0, and created the first actions that, through an exchange of tokens, return certain information from our platform to the Google Assistant and, vice versa, save certain information provided in the Google Assistant to the customer's account on our platform.
We would like both actions on actions.google.com (2) and conversations on Dialogflow (1) to cooperate with each other in the Google Assistant. One team is working on the chatbot, and the other on advanced actions, and we would like it to stay that way.
My question is: is migrating the chatbot from Dialogflow to Actions Builder, and no longer using Dialogflow, really the only way to finally publish it on the Google Assistant?
Or is there perhaps a simpler solution in which both of these environments (working on one profile/agent, of course) cooperate with each other, so that it remains possible to keep working in Dialogflow?
We understand the advantages of Action Builder, but Dialogflow is just good enough for our needs.
There are a few angles from which you can approach this, depending on your exact needs and the limitations you can accept, but the general answer is "yes, you can do both at the same time".
First, Dialogflow ES continues to support the Actions on Google Integration. Just as your Dialogflow agent integrates with other platforms, it should still be able to integrate with Actions.
There are some caveats (and some upsides!) with this, however:
You'll be using the Actions on Google v2 platform, rather than the v3 that comes with the Action Builder (and newer SDK). If the features you need are supported on v2, then you're fine. (Account Linking is supported in v2.) But if you need some of the features in v3, then you will run into problems.
You can't have used the Action Builder on the same Cloud project, and you should start the integration from the Dialogflow side. (But once you do - you'll be able to use the Actions Console to do things such as submit it for review, etc.)
Make sure you do not "upgrade" from Dialogflow to Actions Builder. This severs the two, so you won't be able to update the Action from Dialogflow.
Another approach is to use Action Builder, but have it forward all (or nearly all) requests to Dialogflow. Under this scheme, your Action Builder project has as little as one Scene with an Intent that captures all input and sends it to a webhook you control. The webhook passes the user's text to your Dialogflow agent via the Dialogflow API, gets the response from your Dialogflow agent, and forwards that response back through Action Builder.
This is a little more complicated, but may offer some benefits if you want to take advantage of more advanced Action concepts that may not be available using v2.
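A minimal sketch of that forwarding webhook, assuming the Node.js @assistant/conversation and @google-cloud/dialogflow packages; the handler name and Cloud project ID below are hypothetical:

```typescript
import {conversation} from '@assistant/conversation';
import {SessionsClient} from '@google-cloud/dialogflow';

const sessions = new SessionsClient();
const app = conversation();

// Handler name and project ID are hypothetical; the Scene's catch-all Intent
// should route every user utterance to this handler.
app.handle('forward_to_dialogflow', async (conv) => {
  // Dialogflow ES session IDs are capped at 36 bytes, so derive a short,
  // stable ID from the Assistant conversation's session.
  const sessionId = conv.session.id.slice(-36);
  const sessionPath = sessions.projectAgentSessionPath(
    'my-dialogflow-project',
    sessionId
  );
  const [response] = await sessions.detectIntent({
    session: sessionPath,
    queryInput: {
      text: {text: conv.intent.query ?? '', languageCode: 'en-US'},
    },
  });
  // Relay whatever the Dialogflow ES agent answered back through Action Builder.
  conv.add(response.queryResult?.fulfillmentText ?? "Sorry, I didn't get that.");
});

// Expose `app` via your HTTPS endpoint of choice, e.g. a Cloud Function.
export {app};
```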
It seems Google Assistant is unable to handle certain trigger phrases in the intent. The ones I have come across are the following:
Send message to scott
chat with q
Send text to felix
It works fine inside the Dialogflow simulator. However, it doesn't work at all in the Actions Console simulator or on a real device like a Google Home Mini. The Actions Console simulator says "You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices", and a real device responds "I am sorry, I cannot help you ..", exits completely, and leaves the device in a funky state. It doesn't trigger any fallback intent. I have tried adding an input context, but it makes no difference.
It's very easy to reproduce. Just create a demo action with an intent for the above phrases along with "communicate with penny", invoke your app, and then try the above phrases after the welcome message. It only works if you say "communicate with ..".
Is this a known issue/limitation? Is there a list of phrases that we cannot use to trigger an intent?
The Actions on Google libraries (Node.js, Java) include a limited feature set that allows third-party developers to build actions for the Google Assistant.
Standard features available in the Google Assistant (like "text Mary 'Hello, world!'") won't be available in your action until you build that feature, using fulfillment.
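For instance, a conversational action could define its own "send message" intent and implement the behavior itself in fulfillment. Below is a minimal sketch with the Node.js actions-on-google library, where the intent name, its parameters, and the delivery step are all hypothetical:

```typescript
import {dialogflow} from 'actions-on-google';

const app = dialogflow();

// "send.message" is a hypothetical Dialogflow intent with training phrases
// such as "send message to scott", plus "recipient" and "message" parameters.
app.intent('send.message', (conv, params) => {
  const recipient = params.recipient as string;
  const message = (params.message as string) ?? 'Hello!';
  // The Assistant will not send texts for you; your fulfillment has to call
  // your own messaging backend here to actually deliver the message.
  conv.ask(`Okay, sending "${message}" to ${recipient}.`);
});

// Deploy `app` as your webhook, e.g. behind a Firebase Cloud Function.
export {app};
```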
Rather than looking for a list of phrases you can't use, review the documentation for invocation to see what you can use. Third-party actions for the Google Assistant are invoked like:
"Ok Google, talk to my test app"
"Hey Google, ask my test app to send a message to Scott"
To learn how to get started building for the Google Assistant, check out Google's codelabs at https://codelabs.developers.google.com/codelabs/actions-1/#0
If you've already reviewed Google's Actions on Google Codelabs through level three, consider updating your question to include a list of your intents and a code sample of your fulfillment, so other Stack Overflow users can understand how your project behaves.
I'm wondering how I can create a music player for my Google Assistant compatible devices (e.g. Google Home Mini, my tablet, my phone...). I've been researching how to do this, but I've only found things like using Dialogflow, Node.js, and/or Actions on Google with Google Firebase Cloud Functions. I'm new to all this. I was motivated by Spotify and Pandora and all those other services, so I also tried looking up how they do it, but I found nothing. If any of you know how to do it, please help me.
In addition to all that, I am just a tad bit confused about the whole Dialogflow and Actions on Google integration, but that’s easier to fix than the overall question.
If this isn't "solvable", is there a way to do it with Dialogflow fulfillment?
In order to create something like Spotify or Pandora, you need to partner with Google to create a media action. These are different than the conversational actions that you can create using Actions on Google and Dialogflow.
If you want to create a conversational action with Actions on Google and Dialogflow that produces long-form audio as part of the conversation, you will want to look into the Media response, which you can include in your replies.
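A minimal sketch of a Media response using the Node.js actions-on-google client library; the intent name and audio URL below are placeholders:

```typescript
import {dialogflow, MediaObject, Suggestions} from 'actions-on-google';

const app = dialogflow();

// Hypothetical "play.music" intent; point the URL at your own hosted audio.
app.intent('play.music', (conv) => {
  // A media response must be accompanied by a plain speech response.
  conv.ask('Here is a sample track.');
  conv.ask(new MediaObject({
    name: 'Sample track',
    url: 'https://example.com/audio/track.mp3',
  }));
  // Devices with a screen require suggestion chips alongside media responses.
  conv.ask(new Suggestions('Stop'));
});

export {app};
```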
I just started on a project in Dialogflow, and I was wondering: is it possible to link my Dialogflow agent to a specific desktop application? And if so, what is the solution?
For example:
By saying "launch app", it will open up the desktop application "app"
While this is certainly something that Dialogflow's APIs can help with - this isn't a feature provided by Dialogflow itself. Dialogflow's NLP runs in the cloud - there is nothing local that it can "do".
However, you can create a launcher app that does this sort of thing by opening the microphone and sending either the stream or a speech-to-text version to Dialogflow through the Detect Intent API. Dialogflow can determine an Intent that would handle this and pass that information back to your launcher, and your launcher can then locate the app and start it.
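To make that concrete, here is a rough sketch of the launcher side, assuming the @google-cloud/dialogflow Node.js client and text that has already gone through speech-to-text; the project ID, intent name, and executable path are all hypothetical:

```typescript
import {SessionsClient} from '@google-cloud/dialogflow';
import {spawn} from 'child_process';

const sessions = new SessionsClient();

// Map Dialogflow intent display names to local executables (all hypothetical).
const launchTable: Record<string, string> = {
  'launch.app': '/usr/bin/app',
};

async function handleUtterance(text: string): Promise<void> {
  const session = sessions.projectAgentSessionPath('my-project', 'desktop-session');
  // Send the recognized text to Dialogflow's Detect Intent API.
  const [response] = await sessions.detectIntent({
    session,
    queryInput: {text: {text, languageCode: 'en-US'}},
  });
  const intentName = response.queryResult?.intent?.displayName ?? '';
  const executable = launchTable[intentName];
  if (executable) {
    spawn(executable, [], {detached: true}); // start the app and detach from it
  }
}

handleUtterance('launch app').catch(console.error);
```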
I'm not sure how practical this would be, however. Microsoft already has this feature built-in with Cortana, and Google is building the Assistant into ChromeOS which will do this as well. While I'm not aware of Apple doing this, I may just have missed an announcement that Siri does this as well. And if there isn't someone who is doing this for Linux using some local speech-to-text libraries, it sounds like the perfect opportunity to do so.
You may try the different Dialogflow clients available on their GitHub page. The Java Client 2 may be helpful to start your work. However, you will be required to write your own UI code and consume the Dialogflow API.
From the docs it seems like SpeechResponse is the only documented type of response you can return:
https://developers.google.com/actions/reference/conversation#SpeechResponse
Is it possible to load an image or some other type of media into the Assistant conversation via API.AI or the Actions SDK? It seems like this is supported in api.ai for FB and other messengers:
https://docs.api.ai/docs/rich-messages#image
Thanks!
As of today, the Google Actions SDK supports Conversation Actions, which let you build a better voice UI that is integrated with Google Home.
Even the API.AI integration with Google Actions can be checked out here; it currently shows no support for images in the response.
When they provide an integration with Google Allo, the messaging interface might start supporting images, videos, etc.
That feature seems to be present now. You can look it up in the docs at https://developers.google.com/actions/assistant/responses
Note: images are only supported on devices with a visual output, so Google Home obviously would not be able to display them. But devices with a screen do support a card with an image.
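For example, with the Node.js actions-on-google client library you can attach an image to a basic card; the intent name and image URL below are placeholders:

```typescript
import {dialogflow, BasicCard, Image} from 'actions-on-google';

const app = dialogflow();

// Hypothetical "show.image" intent; point the URL at your own hosted image.
app.intent('show.image', (conv) => {
  // A card needs a plain speech response alongside it.
  conv.ask('Here you go.');
  conv.ask(new BasicCard({
    title: 'Sample card',
    image: new Image({
      url: 'https://example.com/picture.png',
      alt: 'A sample picture',
    }),
  }));
});

export {app};
```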
Pro Tip: Yes you can
What you want to do is represent your image or video as a URL within API.AI, and render that URL as an image or video within your app.
see working example