I am trying to make a music player app that plays music one after the other. Here is a sample conversation:
User: play Beyblade sound
App: Playing "Beyblade Burst: The LOUDEST Beyblade?" by "Kevo"
User: turn it down
App: <default fallback>
What can I do here? Is there a way I can ask Google to handle that request without closing my app?
It is currently a known bug that volume controls aren't available inside a Conversational Action (either when playing a Media response or when playing audio through SSML).
I am trying to add music to my Dialogflow agent. I don't want to add it from the Dialogflow console; I want to add it from the webhook. Can you please tell me how to add the music from the webhook? I am trying this code, but it's not working:
app.intent('Music', (conv) => {
  var speech = '<speak><audio src="soundbank://soundlibrary/ui/gameshow/amzn_ui_sfx_gameshow_countdown_loop_32s_full_01"/>Did not get the audio file<speak>';
});
Also, I want to use an interrupt keyword that will stop this music. Is there a predefined way to do this or, if it has to be user-defined, how can I interrupt the music and proceed with the rest of my code?
Firstly, to be able to add music, it needs to be hosted on a publicly accessible HTTPS endpoint; see the docs. So make sure you can access your file even when using a private browsing mode such as incognito in Chrome.
Secondly, if you choose to use SSML to play your audio, the audio becomes part of the speech response. By doing this, you won't be able to create any custom interruptions or controls for the music. The user can only stop the music by stopping your Action or by saying "Okay Google" again to interrupt your response.
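For reference, here is a corrected version of the snippet from the question, as a minimal sketch. The audio URL is a placeholder of my own (the soundbank:// scheme in the original is an Alexa-only sound library reference and won't resolve here). Note the closing </speak> tag and the conv.ask() call, both of which the original is missing, and that SSML fallback text belongs inside the <audio> element:

const { dialogflow } = require('actions-on-google');
const app = dialogflow();

app.intent('Music', (conv) => {
  // Placeholder URL (an assumption): the file must be publicly reachable over HTTPS.
  const speech = '<speak>' +
    '<audio src="https://example.com/audio/countdown.mp3">' +
    'Did not get the audio file' + // fallback text, spoken if the audio cannot play
    '</audio>' +
    '</speak>';
  conv.ask(speech);
});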
If you want to allow your users to control the music you send them, have a look at the media responses within the Actions on Google library.
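A minimal sketch of such a media response, assuming the actions-on-google v2 Node.js client library and the same placeholder URLs as above; media responses come with built-in playback controls:

const { dialogflow, MediaObject, Image, Suggestions } = require('actions-on-google');
const app = dialogflow();

app.intent('Music', (conv) => {
  conv.ask('Here is the track.');
  conv.ask(new MediaObject({
    name: 'Countdown loop',
    description: 'A 32-second countdown loop',
    url: 'https://example.com/audio/countdown.mp3', // placeholder; must be public HTTPS
    icon: new Image({
      url: 'https://example.com/images/album-art.png', // placeholder artwork
      alt: 'Album art',
    }),
  }));
  // Surfaces with a screen require suggestion chips alongside a media response.
  conv.ask(new Suggestions(['Stop']));
});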
When I ask for help, Alexa's built-in help is invoked instead of my custom help intent. If the audio player is not playing, e.g. on the launch page, my custom help is invoked, but not while the audio player is running. How can I override that?
Thank you.
Per the AudioPlayer documentation:
When sending a Play directive, you normally set the shouldEndSession flag in the response object to true to end the session.
So once the user has invoked the Play directive, they are no longer interacting with your skill. The user can control the playback of content from your skill using the built-in playback control intents, but any other interaction with your skill requires use of the normal invocation phrase, e.g. "Alexa, ask [SkillName] for help".
What about setting shouldEndSession to false?
This has the effect of expecting more user input. While this would allow the user to ask for help (or otherwise interact with your skill) immediately after starting the audio playback, it would also pause the audio playback to listen for this input.
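To illustrate, a minimal sketch of the usual pattern with the ASK SDK v2 for Node.js; the intent name, stream URL, and token are placeholders of my own, not from the question:

const Alexa = require('ask-sdk-core');

const PlayStreamIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'PlayStreamIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Starting playback.')
      // REPLACE_ALL clears the queue; the URL and token are placeholders.
      .addAudioPlayerPlayDirective('REPLACE_ALL', 'https://example.com/stream.mp3', 'token-1', 0)
      // End the session: the microphone closes and playback continues,
      // so further interaction needs the full invocation phrase.
      .withShouldEndSession(true)
      .getResponse();
  },
};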
You can't.
In an AudioPlayer skill, once the skill starts playing audio there is no internal session management, and you can only respond using AudioPlayer directives like Play, Pause, Next, and some other directives, which you can find in the link here.
I am a new user of the Google Home SDK. I am developing a simple app that takes what I said and performs some defined actions.
What I want to implement is: when I say "play the special song for someones-name", Google Assistant will respond "here you go", followed by playing the defined song from Spotify. I can hard-code the artist's name and album into the app, and I have already linked Spotify to my Google Home Assistant.
I have a couple of specific questions after getting lost in reading the topics on Create conversational experiences from scratch by Google:
(1) Suppose I just need to hard-code the song and album name and let Spotify play it. Is there any code snippet for that purpose? I'm new to Node.js, so maybe it's easier than I thought.
(2) I am developing the app using my dev account on GCP, say Account-A, it is different from the Google Account I signed in on my home device, say Account-B. How do I deploy and test the app on the home device?
Your help and advice are much appreciated.
There's no way to start up a standard Spotify session through a conversational action. If you have the media file, you could have your conversational action play a MediaResponse.
Alternatively, you may instead want to create a routine that accepts a given query and completes an action. That will allow you to start a media stream for whatever you want.
I am trying to develop a custom skill that would perform the operations below:
Alexa, launch Michael Jackson app
Then I would provide options for the user to select from:
Alexa, play music on Spotify (and I need to internally pass the value of the artist (MJ))
Alexa, play music on Pandora (and I need to internally pass the value of the artist (MJ))
Alexa, play music on podcast (and I need to internally pass the value of the artist (MJ))
The user can specify MJ on Spotify, iMusic, Pandora, etc.
Is this doable?
You cannot invoke Alexa again with something like 'Alexa, play music on Spotify' while one session is going on. There is a custom solution, but it works only if the other services (like Spotify) have exposed a REST API you can use. If they have a REST API, then after opening your skill ('Alexa, launch Michael Jackson app') you can give the user options like the ones below:
Say 1 to play music on Spotify
Say 2 to play music on Pandora
Say 3 to play music on a podcast
Once the user responds with a number (1, 2, 3, etc.), you can take another input from the user for the artist name, then call the corresponding API according to the user's input.
Please note that all of this logic is possible only if the other party has exposed a REST API.
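A minimal sketch of that numbered menu as an ASK SDK v2 Node.js launch handler; the wording is illustrative only:

const Alexa = require('ask-sdk-core');

// Handles "Alexa, launch Michael Jackson app" and reads out the menu.
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    const menu = 'Say 1 to play music on Spotify, 2 for Pandora, or 3 for a podcast.';
    return handlerInput.responseBuilder
      .speak(menu)
      .reprompt(menu) // keep the session open so the user can answer with a number
      .getResponse();
  },
};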
Yes, this can be done in several ways. One would require that your app respond to the launch request and also to three intents:
"Alexa, open Michael Jackson app" would launch your app. It should respond to the launch request with something like "Where would you like me to play Michael Jackson? You can say Spotify, Pandora, or podcast."
SpotifyIntent: "play music on Spotify" or even just "Spotify"
PandoraIntent: "play music on Pandora" or even just "Pandora"
PodcastIntent: "play music on podcast" or even just "podcast".
Your intent handlers would then need to make the REST calls to the selected service.
This could also be done using slots, but I think the above is about the simplest way to accomplish what you describe.
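A sketch of one such intent handler, again assuming the ASK SDK v2 for Node.js; playOnService() is a hypothetical placeholder for whatever REST call the chosen service actually documents:

const Alexa = require('ask-sdk-core');

// Hypothetical helper: stands in for the service's real REST API call.
async function playOnService(service, artist) {
  // e.g. send an authenticated HTTPS request to the service's documented endpoint here
}

const SpotifyIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'SpotifyIntent';
  },
  async handle(handlerInput) {
    // The artist value is passed internally, as the question describes.
    await playOnService('spotify', 'Michael Jackson');
    return handlerInput.responseBuilder
      .speak('Starting Michael Jackson on Spotify.')
      .getResponse();
  },
};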
I am trying to integrate the Spotify Play Button into the ThingLink Spotify tag so that when the iframe loads, it starts playing automatically. Is there a way to do that with some parameter?
The functionality would be similar to the SoundCloud and Vimeo players here:
http://www.thinglink.com/scene/251225958915244034
Without autoplay, the user would have to click Play twice, which wouldn't be very good UX.
Thanks!
-Albert
There is no autoplay functionality at present.
Autoplay isn't really what the Play Button is about — it's designed so people can listen to music if they want to. We don't really want to interrupt whatever the user is listening to already (be it something in Spotify, something else, or silence) without express permission to do so first.