I am a new user of the Google Home SDK. I am developing a simple app that takes what I say and performs some defined actions.
What I want to implement is: when I say "play the special song for someone's-name", Google Assistant responds "here you go" and then plays the defined song from Spotify. I can hard-code the artist's name and album into the app, and I have already linked Spotify to my Google Home Assistant.
I have a couple of specific questions after getting lost reading Google's "Create conversational experiences from scratch" topics:
(1) Suppose I just need to hard-code the song and album name and have Spotify play it; is there a code snippet for that purpose? I'm new to Node.js, so maybe it's easier than I think.
(2) I am developing the app using my dev account on GCP, say Account-A, which is different from the Google account I am signed in with on my Home device, say Account-B. How do I deploy and test the app on the Home device?
Your help and advice are much appreciated.
There's no way to start up a standard Spotify session through a conversational action. If you have the media file, you could have your conversational action play a MediaResponse.
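A minimal sketch of that media-response approach, assuming you host the audio file yourself and use the actions-on-google Node.js client library (v2) with a Dialogflow fulfillment. The intent name, song title, and URL below are placeholders (not from the original question), and this plays your own file rather than starting a Spotify session:

const { dialogflow, MediaObject, Suggestions } = require('actions-on-google');

const app = dialogflow();

// Placeholder intent name; map it to whatever Dialogflow intent matches
// "play the special song for ...".
app.intent('play_special_song', (conv) => {
  conv.ask('Here you go.');
  conv.ask(new MediaObject({
    name: 'The Special Song',                           // hard-coded title
    url: 'https://example.com/audio/special-song.mp3',  // placeholder HTTPS audio file you host yourself
    description: 'The special song',
  }));
  // Devices with screens require at least one suggestion chip alongside a media response.
  conv.ask(new Suggestions('Stop'));
});

// Export the app as your webhook, e.g. with Cloud Functions for Firebase:
// exports.fulfillment = functions.https.onRequest(app);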
Alternatively, you could create a routine that accepts a given query and completes an action. That will allow you to start a media stream for whatever you want.
Related
I am trying to develop a custom skill that would perform the operation below:
Alexa, launch michael jackson app
Then I would provide options for the user to select from:
Alexa, play music on Spotify (and I need to internally pass the value of the artist (MJ))
Alexa, play music on Pandora (and I need to internally pass the value of the artist (MJ))
Alexa, play music on podcast (and I need to internally pass the value of the artist (MJ))
The user can specify MJ on Spotify, iMusic, Pandora, etc.
Is this doable?
You cannot invoke Alexa again with something like 'Alexa, play music on Spotify' while a session is already going on. There is a custom solution, but only if the other services (like Spotify) expose a REST API you can use. If they do, then after opening your skill ("Alexa, launch Michael Jackson app") you can give the user options like the ones below:
say 1 to play music on Spotify
say 2 to play music on Pandora
say 3 to play music on podcast
Once the user responds with a number (1, 2, 3, etc.), you can take another input from the user for the artist name, then call the corresponding API according to the user's input.
Please note that all of this logic is only possible if the other party has exposed a REST API.
Yes, this can be done in several ways. One would require that your app respond to the launch request and also to three intents:
"Alexa, open Michael Jackson app" would launch your app. It should respond to the launch request with something like "where would you like me to play Michael Jackson? You can say spotify, pandora, or podcast"
SpotifyIntent: "play music on Spotify" or even just "Spotify"
PandoraIntent: "play music on Pandora" or even just "Pandora"
PodcastIntent: "play music on podcast" or even just "podcast".
Your intent handlers would then need to make the REST calls to the selected service.
This could also be done using slots, but I think the above is about the simplest way to accomplish what you describe.
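Not from the answer above, but as a rough sketch of how those handlers could be laid out with the ASK SDK v2 for Node.js. The intent names follow the list above; playOnService() is a hypothetical helper standing in for whatever REST call the selected service exposes:

const Alexa = require('ask-sdk-core');

// Hypothetical helper: replace with the real REST call of the selected service.
async function playOnService(service, artist) {
  console.log(`Would call the ${service} REST API to play ${artist}`);
}

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    const prompt = 'Where would you like me to play Michael Jackson? You can say Spotify, Pandora, or podcast.';
    return handlerInput.responseBuilder.speak(prompt).reprompt(prompt).getResponse();
  },
};

const SpotifyIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'SpotifyIntent';
  },
  async handle(handlerInput) {
    await playOnService('Spotify', 'Michael Jackson');
    return handlerInput.responseBuilder.speak('Playing Michael Jackson on Spotify.').getResponse();
  },
};

// PandoraIntentHandler and PodcastIntentHandler would follow the same pattern.
exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler, SpotifyIntentHandler)
  .lambda();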
Is it somehow possible to call a web service that can ask things back and receive the answer?
Let me explain:
At home, I have a media center with some movies on it. Its content changes over time, of course: files get added, removed, renamed, and so on.
Now I'd like to say, for example, "Hey Google, play wizard of oz" and then Wizard of Oz should be played on my TV.
Since I know how to develop things in .NET, the web service running at home already exists and works fine; movies start. And I guess that, thanks to API.ai, I should be able to connect it to Google Home via the webhook function.
But what if there are multiple results and I want to ask, which result should be picked? For example:
User says "Play Star Wars"
Google Home calls my web service, which checks my disk and finds out that there are multiple Star Wars movies.
Now, the user needs to be asked "There are multiple results. Which one would you like to see? Star Wars: A new hope, Star Wars: The empire strikes back, ..."
The user now answers "Star Wars: A new hope"
Google Home calls the web service again with that info and after success it replies "Okay, playing Star Wars: A new hope."
I haven't found out how to do that with API.ai. As I understand, API.ai calls the web service with some parameters (JSON), sends the response text received from the web service back to Google Home and then just ends.
Or did I miss something? Do you guys have any idea how I could achieve this scenario?
Or can we somehow develop our own private services, like the ones listed in the Google Home app (Akinator, Dominos, CNBC, ...), or is that only possible as a partner? That would be nice, actually.
Thanks in advance!
As I understand, API.ai calls the web service with some parameters (JSON), sends the response text received from the web service back to Google Home and then just ends.
The bot is still in control unless you send from your web service:
data: {
  google: {
    expect_user_response: false,
  }
}
or check this box in API.AI in the intent pane
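For context, this is roughly what a complete response looks like in the legacy API.AI (v1) webhook format, sketched here as a small Node.js/Express handler; the route and port are placeholders. Setting expect_user_response to true keeps the conversation open so you can ask which Star Wars movie to play:

const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  res.json({
    speech: 'There are multiple results. Which one would you like to see?',
    displayText: 'There are multiple results. Which one would you like to see?',
    data: {
      google: {
        // true keeps the mic open for the user's answer; false ends the conversation.
        expect_user_response: true,
      },
    },
  });
});

app.listen(3000);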
If you are using the ActionsSdkAssistant, make sure that you are using the right method: ask() vs. tell().
https://developers.google.com/actions/reference/ActionsSdkAssistant#ask
https://developers.google.com/actions/reference/ActionsSdkAssistant#tell
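As a hedged sketch only (the v1 ActionsSdkAssistant library referenced in those docs is legacy by now): ask() sends a prompt and keeps the conversation open, while tell() sends a final response and ends it. The Express setup and dialog strings below are placeholders:

const express = require('express');
const { ActionsSdkAssistant } = require('actions-on-google');

const app = express();
app.use(express.json());

app.post('/', (req, res) => {
  const assistant = new ActionsSdkAssistant({ request: req, response: res });

  function mainIntent(assistant) {
    // ask(): speak the question and wait for the user's reply.
    assistant.ask(assistant.buildInputPrompt(false,
      'There are multiple results. Which one would you like to see?'));
  }

  function pickMovie(assistant) {
    // tell(): final response; the conversation ends here.
    assistant.tell('Okay, playing ' + assistant.getRawInput() + '.');
  }

  const actionMap = new Map();
  actionMap.set(assistant.StandardIntents.MAIN, mainIntent);
  actionMap.set(assistant.StandardIntents.TEXT, pickMovie);
  assistant.handleRequest(actionMap);
});

app.listen(8080);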
You need to study the API and the API.AI webhook request/response format, and implement it. Take a look at this tutorial. Then, of course, you will have to poke a hole in your firewall to be able to receive the calls from Google, or use ngrok or the BST proxy.
I am a developer for Playmoss, where users can create playlists with different music services.
We are planning on adding Spotify support to our playlists in a way similar to what bop.fm does.
Context
Taking for example this playlist (in which all songs are available on Spotify, at least in Spain)…
https://bop.fm/p/o12l
…if we have the Spotify client installed on our computer (tested with a Mac):
As soon as the playlist starts playing, we can click the Spotify icon in the top right [picture] and we will be playing the songs through Spotify.
Using the bop.fm control interface we can pause, play, skip to the next track, and even skip to a point in the track with the progress bar.
This is similar to, but even more powerful than, the official Spotify Play Button; see an example here:
http://jsfiddle.net/insonorizate/a5jf39yn/
With the Play Button there is previous, play, pause, and next functionality, but no seek.
Of course, it cannot be customized in any way or called from JavaScript.
(On bop.fm it is possible to open a debugger console and call
Bop.Player.pause()
or
Bop.Player.play()
to pause or play the track being played on bop.fm via Spotify.)
Fiddling a little with the bop.fm page, there are some interesting things. There is an iframe in the main page pointing to:
https://embed.spotify.com/remote-control-bridge/
Viewing this iframe's source, we find something like this:
// Expose the OAuth Token to the Javascript
var tokenData = 'NAowChgKB1Nwb3RpZnkSABoGmAEByAEBJReQCFQSFG2Ynvz1oBKgxv2mE1XXz_1Au-cg';
// Pass the remote control to the bridge
var remoteControlBridge = new Spotify.RemoteControlBridge();
remoteControlBridge.init(tokenData);
There's no documentation for Spotify.RemoteControlBridge (0 results for "Spotify.RemoteControlBridge" on Google), and there isn't anything in the documentation of the different APIs even close to controlling the Spotify player in a way similar to this.
Question
How can I control the Spotify desktop app from a browser?
Does bop.fm have a special arrangement with Spotify, and are they using some "secret API"?
Are they exploiting some functionality that I fail to find?
Is it possible to replicate it?
Is it in accordance with the Spotify terms?
Thanks!
You can't control the Spotify client or listen to events the Spotify Web Helper is emitting. Imagine if everybody could: every website could potentially play a song without your permission, or even know instantly what you are listening to. To prevent this, Spotify only allows approved partners to use this feature.
As you figured out, the remote-control-bridge provides this functionality. It can communicate with the Spotify Web Helper running on your system, which is secured by an OAuth token and a CSRF token. In the remote-control-bridge you can even see the allowed partners:
Spotify (who knew)
Yahoo
last.fm
coachella.com
bop.fm
sandpit.us
echonest
musixmatch
You can contact them and ask for a partnership. I'm sure they won't bite.
This is actually documented a little bit on the Spotify website, in the developers section.
I think bop.fm uses their custom Spotify Play Button widget, which makes use of the iframe that you mentioned.
Here you can find the documentation about this Spotify functionality. You can then modify it to your own needs using JavaScript, etc.
I recently got through the beginner tutorial for creating a web app with the Spotify API: https://developer.spotify.com/web-api/tutorial/. The tutorial was great for showing how to authenticate a user with OAuth and log them in.
The problem I am having is with the endpoint. I can't seem to figure out how to change the endpoint so that instead of displaying a user's profile, I can see a list of a user's tracks, or better yet their starred or top 10 tracks.
From a 10,000-foot perspective, what I want to build is an app that lets users easily log in through their Spotify account, takes their starred or top tracks, and pushes them to a radio that I am building with a Raspberry Pi.
I am new to working with the Spotify API, and to working with APIs in general, so any advice would be awesome.
At the moment, there is no way to get the "starred" playlist (at least, it's not documented).
I don't know what you mean by "top 10 user tracks", since that doesn't exist as far as I know.
To get a list of the account's current playlists, change the URL to:
https://api.spotify.com/v1/users/{user_id}/playlists
With this URL, you will get a list of simplified playlist objects wrapped inside a paging object. Now you can select one of the playlists (or loop through them) and fetch its tracks this way.
NOTE:
If you also want to fetch private playlists, make sure you use the scope playlist-read-private.
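As a quick sketch of what that call could look like in Node.js, assuming you already have an OAuth access token from the tutorial's authorization flow; the user ID and token variables below are placeholders, and any HTTP client works:

const request = require('request'); // the widely used 'request' npm package

const accessToken = process.env.SPOTIFY_ACCESS_TOKEN; // token from the authorization flow
const userId = 'YOUR_SPOTIFY_USER_ID';                 // placeholder Spotify user ID

request.get({
  url: 'https://api.spotify.com/v1/users/' + userId + '/playlists',
  headers: { 'Authorization': 'Bearer ' + accessToken },
  json: true,
}, (error, response, body) => {
  if (error || response.statusCode !== 200) {
    return console.error('Playlist request failed:', error || body);
  }
  // body is a paging object; body.items holds the simplified playlist objects.
  body.items.forEach((playlist) => {
    console.log(playlist.name + ' (' + playlist.tracks.total + ' tracks) - ' + playlist.id);
  });
});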
I only see Artist, Album, and Tracks lookups in the docs. I want to display what I'm currently listening to. Is there a way to do this using the API?
https://developer.spotify.com/technologies/web-api/lookup/
Spotify does not provide this at this time. You can get it either by turning on Last.fm scrobbling or by accessing Facebook music data.
As Thomas said, Spotify does not provide such a feature 'directly'.
But there are some ways to get it to work.
You have to be a bit more specific about what you want to do: web, desktop, or app?
I wrote a tiny console app using an external DLL:
"File/Link removed" - send a PM for further information!
If this is what you need, just message me. It's a really tiny application, just for test purposes, because I'm currently developing an overlay.