So, I was thinking of making a capsule for launching a specific app on the phone. Apps can be called by their commonly spoken names: e.g., Instagram can be called "insta" and Facebook can be called "fb". So the launcher capsule should properly map each commonly spoken name to the correct app. The catch is that many apps can start with the same prefix. For example, Instagram, Instamart, and Instagram Lite all start with "insta", so our capsule must define conditions for opening one specific app, maybe based on the user's previous launch history. Failing that, it should display a list of matching apps to the user.
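For concreteness, the mapping and disambiguation logic described above could be sketched like this (TypeScript; the alias table, package IDs, and history store are all hypothetical and not tied to any assistant SDK):

// Alias table: commonly spoken names -> candidate app package IDs (illustrative).
const aliases: Record<string, string[]> = {
  insta: ['com.instagram.android', 'in.swiggy.instamart', 'com.instagram.lite'],
  fb: ['com.facebook.katana'],
};

// Resolve a spoken name to one app, or to a list for the user to pick from.
function resolveApp(
  spoken: string,
  lastLaunched: Map<string, number>, // packageId -> last launch time (assumed history store)
): string | string[] {
  const candidates = aliases[spoken.toLowerCase()] ?? [];
  if (candidates.length === 1) return candidates[0];
  // Prefer the candidate the user launched most recently.
  const ranked = candidates
    .filter((id) => lastLaunched.has(id))
    .sort((a, b) => lastLaunched.get(b)! - lastLaunched.get(a)!);
  // Otherwise return every candidate so the UI can show a disambiguation list.
  return ranked[0] ?? candidates;
}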
I have done a similar project. For now I launch an app via the URI; check it out below. Part of the code is there for debugging purposes.
result-view {
  match {
    RetrievedItem (this) {
      from-output: SearchSong (output)
    }
  }
  // Override with an empty string to avoid displaying the default result
  // dialog. If you want to debug, you can print `#{value(this.uri)}` here instead.
  message("")
  app-launch {
    payload-uri ("#{value(this)}")
  }
  render {
    layout {
      section {
        content {
          paragraph {
            value("URI: #{value(this)}")
          }
        }
      }
    }
  }
}
app-launch can launch an app installed on the device via an Android deep-link URI. Please see more at https://bixbydevelopers.com/dev/docs/reference/type/result-view.app-launch
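To make "deep-link URI" concrete, here is a tiny TypeScript sketch of the kind of string payload-uri might evaluate to (the spotify: scheme is an assumption about that particular app, not part of Bixby):

// Illustrative deep link; each app defines and registers its own URI scheme.
const spotifySearch = (query: string): string =>
  `spotify:search:${encodeURIComponent(query)}`;

console.log(spotifySearch('blinding lights')); // spotify:search:blinding%20lights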
However, I'd like to share a few thoughts:
A capsule is voice-assistant focused. I might be wrong, but to me "Facebook" and "Instagram" are more common in their full names when people are speaking. The shortened versions are usually used in texting.
A capsule is also focused on assisting the user with a specific task. By design (for security and privacy), a capsule is isolated from the device. Currently a capsule cannot detect or check the list of apps installed on the device. In the question's example, if the utterance is "launch insta", the capsule must decide between Instagram, Instamart, InstaXYZ, etc. before filling in the URI. A capsule cannot search for and launch an app starting with "Insta", nor prompt the user with a list of installed apps starting with "Insta".
For common apps such as Google Maps, Facebook, and Netflix, the utterance "launch [app name]" is already supported; for example, "Launch Facebook" already works. Assuming this capsule is called "My launcher", the new utterance would be "Ask my launcher to launch fb", which would be difficult for users to adopt.
In practice, app-launch is used with a specific app in mind. For example, a workout app developer created a capsule so that users can add an entry to the workout app and launch the app in one utterance.
Hmm, a final note: it seems Bixby has already learned "fb". The utterance "launch fb" works... LOL
OK, I am using Dialogflow Essentials. Several intents are already defined and integrated with Google Assistant, and those intents work fine. But now I have made a new intent in Dialogflow, and it works fine with the "Try it now" option; however, when I tried to integrate it with Google Assistant (Dialogflow -> Integrations -> Google Assistant -> Continue with integration), I cannot see the new intent in the list (a pop-up with the previous intents and their checkboxes). Can someone help me understand why the new intent is not visible in the list?
The error message in the screenshot reads:
The maximum number of intents is 10.
You are limited to 10 "deep link" Intents that can be used as part of the Action invocation. These enable you to say things like "Ask Super Action to turn the lights on", instead of just "Talk to Super Action" and then, while it is running, asking it to turn the lights on.
You are allowed many more Intents overall - but just 10 that can be used as part of the invocation phrase.
If this is actually for controlling Smart Home devices, you may wish to look into the Smart Home integration for the Google Assistant instead. This lets people control your devices directly through commands to the Assistant ("Hey Google, Turn on the bedroom lights") instead of having to go through an Action you've written ("Hey Google, Ask Super Home to turn on the bedroom lights"). This method does not involve Dialogflow at all.
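For a sense of the difference, here is a minimal sketch of a Smart Home fulfillment with the actions-on-google Node.js library (TypeScript; the device, agentUserId, and route are made-up assumptions, and the onQuery/onExecute handlers are omitted):

import express from 'express';
import { smarthome } from 'actions-on-google';

const app = smarthome();

// SYNC: tell the Assistant which devices this user has.
app.onSync((body) => ({
  requestId: body.requestId,
  payload: {
    agentUserId: 'user-123', // assumed stable per-user ID from account linking
    devices: [{
      id: 'light-1',
      type: 'action.devices.types.LIGHT',
      traits: ['action.devices.traits.OnOff'],
      name: { name: 'Bedroom light' },
      willReportState: false,
    }],
  },
}));

// onQuery/onExecute handlers would go here.

const server = express().use(express.json());
server.post('/fulfillment', app); // assumed webhook route
server.listen(3000);

With this integration, users say "Hey Google, turn on the bedroom light" directly; there is no invocation phrase and no Dialogflow agent involved.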
I have a smart home Dialogflow webhook working from the Google Actions test console, but when I speak to a Google Home device, there is no sign that my intents are being recognized. E.g., when I enter “Home temperature?” in the console, I can see it calling my webhook, executing my script, and responding with “The temperature is 72 degrees.”
But when I say “Hey Google, Home temperature” to my Google Home device, it says my Nest device is not registered, or something like that. I.e., it says what it would say if I did not have Smart Home action intents registered with Google Actions.
I am unable to find anything in the docs, or via web searches, that says what I am supposed to do to get my Google Assistant devices to recognize my custom intent phrases.
Does anyone have this working? The Smart Home integration is not supposed to require a lead-in like “Hey Google, ask whoever, Home temperature”, right? That is only for “conversation mode” integrations, correct? My understanding is that “Smart Home” mode does not require a lead-in. Please correct me if that is incorrect…
Either way, my voice requests through my Google Home are not recognized.
Please, any advice for what I am missing or how I can troubleshoot this?
Thanks!
P.S. I'm new to Stack Overflow, and I didn't find this "dialogflow" group until posting in another group. So I am reposting here. Sorry if this is redundant. I could not find how to delete the original post...
It sounds like I was wrong about the "Hey Google, talk to ..." requirement for Dialogflow.
The "Smart Home" mode does not preclude this. You cannot just say, "Hey Google, home temperature?", you have to say, "Hey Google, ask [my dialogflow app], home temperature?"
Furthermore, unless you publish your app, the response will always begin with "Alright, here's the test version of [my dialogflow app]..."
Between the two, it pretty much ruins it for me... Off to the drawing board.
I am a new user of the Google Home SDK. I am developing a simple app that takes what I say and performs some defined actions.
What I want to implement is: when I say "play the special song for someones-name", the Google Assistant responds "here you go", followed by playing the defined song from Spotify. I can hard-code the artist's name and album into the app, and I have already linked Spotify to my Google Home Assistant.
I have a couple of specific questions after getting lost reading Google's topics on creating conversational experiences from scratch:
(1) Suppose I just need to hard-code the song and album name and let Spotify play it - is there a code snippet for that purpose? I'm new to Node.js, so maybe it's easier than I thought.
(2) I am developing the app using my dev account on GCP, say Account-A; it is different from the Google account I signed in with on my Home device, say Account-B. How do I deploy and test the app on the Home device?
Your help and advice are much appreciated.
There's no way to start up a standard Spotify session through a conversational action. If you have the media file, you could have your conversational action play a MediaResponse.
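A rough sketch of that approach with the actions-on-google Dialogflow library (TypeScript; the intent name and audio URL are placeholders, and you would need to host the audio file yourself):

import { dialogflow, MediaObject, Suggestions } from 'actions-on-google';

const app = dialogflow();

// Hypothetical intent name; map it to your training phrase in Dialogflow.
app.intent('play_special_song', (conv) => {
  conv.ask('Here you go!');
  conv.ask(new MediaObject({
    name: 'The special song',
    url: 'https://example.com/audio/special-song.mp3', // placeholder HTTPS audio file
  }));
  // Devices with screens require a suggestion chip alongside a media response.
  conv.ask(new Suggestions('Stop'));
});

You would then deploy `app` as your Dialogflow fulfillment webhook (for example, as a Cloud Function).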
Alternatively, you may want to create a routine that accepts a given query and completes an action. That will allow you to start a media stream for whatever you want.
I have created an action inside Google, connected it to Dialogflow, and connected it via fulfillment to my own home network.
Now I would like to be able to use this on my phone, Raspberry Pi, etc. But I don't want to deploy it to the whole world (because then they could turn off my lights).
How should I do this?
You can release your action to an Alpha channel, which lets you control who can access your action - just yourself by default, but you can also add up to twenty accounts.
https://developers.google.com/actions/deploy/release-environments
During our testing, we were unable to complete at least one of the behaviors or actions advertised by your app. Please make sure that a user can complete all core conversational flows listed in your registration information or recommended by your app.
Thank you for submitting your assistant app for review!
During testing, your app was unable to complete a function detailed in the app's description. The reviewer interacted with the app by saying “how many iphones were sold in the UK?”; the app replied “I didn't get that. Can you try with other question?” and left the conversation.
How can I resolve the above point so that my Google Assistant action gets approved?
Without seeing the code in question, or the intent you think should be handling this in Dialogflow, it is pretty difficult to say - but we can generalize.
It sounds like you have two issues:
Your fallback intent that generated the "I didn't get that" message is closing the conversation. This means that either the "close conversation" checkbox is checked in Dialogflow, you're using the app.tell() method when you should be using app.ask() instead (see the sketch after these two points), or the JSON you're sending back has close conversation set to true.
You don't have an intent to handle the question about how many iPhones were sold in the UK. This could be because you just don't list anything like that as a sample phrase, or because the two parameters (one for the object type and one for the location) aren't using entity types that would match.
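As a sketch of the first fix, assuming the older actions-on-google v1 client library that the app.ask()/app.tell() names come from (the fallback action name and reply text are placeholders):

// ask() keeps the conversation open; tell() replies and then ends it.
const { DialogflowApp } = require('actions-on-google');

export const fulfillment = (request: any, response: any): void => {
  const app = new DialogflowApp({ request, response });
  const actionMap = new Map<string, (assistant: any) => void>();
  actionMap.set('input.unknown', (assistant) => {
    // Using ask() here means the mic stays open for another question.
    assistant.ask("I didn't get that. Can you try another question?");
  });
  app.handleRequest(actionMap);
};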
It means that somewhere, either in your app description or in a Dialogflow intent (the reviewers have full access to see what's in your intents), you hinted that “how many iphones were sold in the UK?” would be a valid question. Try changing the description/intents to properly match the restrictions of your app.