Authorising a Spotify session on a headless system - spotify

Clearly by the negative score, I haven't provided enough information - sorry about that. However, perhaps add comments to explain why rather than just marking it down?
2nd attempt at a description:
I would like to be able to connect to Spotify's web API interface (https://developer.spotify.com/web-api/) on a headless embedded platform (Arm based simple MCU with WiFi). The username and password would be hardcoded into the system, probably added at setup time with the help of a mobile device (providing a temporary user interface).
I want to be able to add tracks to a playlist, which requires an authentication token. Spotify's usual flow requires the embedded platform to host their webpage login, as described here (https://developer.spotify.com/web-api/authorization-guide/).
Is it possible to authenticate without the webpage?
I have seen here (https://developer.spotify.com/technologies/spotify-ios-sdk/token-swap-refresh/) that Spotify recommend mobile apps use a remote server to handle refreshing of tokens - perhaps that's a route?
Any pointers would be appreciated.

I don't think it is a bad question. I am also working on a headless player that runs on a local network, which makes the authorization flow a bit awkward. This is not much of an answer, but let me explain how it can be done.
Your headless system needs to have a web interface that can redirect to the Spotify authorization URL and handle the callback. The problem is that you have to register the callback URL in your Spotify app. Say you register http://server1/spotify/auth/callback. Now server1 needs to be accessible from the device doing the authorization, e.g. by adding it to /etc/hosts.
The good news is that the refresh can be done without user intervention, so if you store the refresh token the user will only need to do this once, after installing.
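For reference, a minimal sketch of that refresh call (assuming a Node.js environment with node-fetch; the client ID, client secret and refresh token are the values stored during the initial authorization):
// Sketch: exchange a stored refresh token for a fresh access token, no user interaction needed.
const fetch = require('node-fetch');

async function refreshAccessToken(clientId, clientSecret, refreshToken) {
    const res = await fetch('https://accounts.spotify.com/api/token', {
        method: 'POST',
        headers: {
            'Authorization': 'Basic ' + Buffer.from(clientId + ':' + clientSecret).toString('base64'),
            'Content-Type': 'application/x-www-form-urlencoded'
        },
        body: new URLSearchParams({
            grant_type: 'refresh_token',
            refresh_token: refreshToken
        })
    });
    if (!res.ok) throw new Error('Token refresh failed: ' + res.status);
    const body = await res.json();
    return body.access_token; // Spotify may also return a rotated refresh_token here
}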

I know that this is really late, but for anyone having the same issue...
I am working on something similar to what was mentioned above, so I'll share what I know. I am creating a music player that can act as another device on my Spotify account (using https://developer.spotify.com/documentation/web-playback-sdk/) as well as be controlled by my custom webpage.
There are 3 parts to this: the backend server, the SDK player webpage (for me: http://localhost:8080/#/pup/player), and the frontend UI webpage
(all the code snippets are a part of a class)
The only way I was able to get it running was like so:
Start the backend server and initialize puppeteer
async initPup(){
    this.browser = await puppeteer.launch({
        headless: false, // This is important, because the Spotify SDK doesn't create the device when using headless
        devtools: true,
        executablePath: "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe", // I also have to use Chrome and not Chromium, because Chromium is missing support for EME keySystems (yes, I've tried bruteforcing Chromium versions and getting Firefox to work using createBrowserFetcher())
        ignoreDefaultArgs: ['--mute-audio'],
        args: ['--autoplay-policy=no-user-gesture-required']
    });
    this.page = (await this.browser.pages())[0]; // reuse the first page if one exists
    if(this.page == undefined){
        this.page = await this.browser.newPage();
    }
    this.pup_ready = true;
    console.log(await this.page.browser().version())
}
Open your SDK player page with puppeteer and pass the ClientID and ClientSecret of your Spotify project (https://developer.spotify.com/dashboard/):
async openPlayer(){
    // const player_page = "http://localhost:8080/#/pup/player"
    if(this.pup_ready){
        await this.page.goto(player_page + "/?&cid=" + this.client_id + "&csec=" + this.client_secret);
    }
}
On the SDK player webpage, save the cid and csec URL params to localStorage. This should be done when no URL parameter named "code" has been given, because that is the authorization code, which will be generated in the next step.
Something like:
var auth_code = url_params_array.find(x=>x.param.includes("code")); // try to get the auth code
var c_id = url_params_array.find(x=>x.param.includes("cid"));      // get cid
var c_sec = url_params_array.find(x=>x.param.includes("csec"));    // get csec
var token = undefined;

if(auth_code == undefined){ // the auth code is not defined yet and has to be created
    // SAVING CLIENT ID and CLIENT SECRET
    c_id = c_id.value;
    c_sec = c_sec.value;
    window.localStorage.setItem("__cid", c_id);
    window.localStorage.setItem("__csec", c_sec);

    // GETTING THE AUTH CODE
    var scope = "streaming user-read-email user-read-private";
    var state = "";
    var auth_query_parameters = new URLSearchParams({
        response_type: "code",
        client_id: c_id,
        scope: scope,
        redirect_uri: "http://localhost:8080/#/pup/player/",
        state: state
    });
    window.open('https://accounts.spotify.com/authorize/?' + auth_query_parameters.toString()); // take the puppeteer to the Spotify login page
}
Log in on the Spotify page using your credentials to create the auth code. I had to use https://www.npmjs.com/package/puppeteer-extra-plugin-stealth to bypass CAPTCHAs (a sketch of wiring that in follows the snippet below).
async spotifyLogin(mail = "<YOUR_SPOTIFY_MAIL>", pass = "<YOUR_SPOTIFY_PASSWORD>") {
    var p = this.page = (await this.browser.pages())[1]; // get the newly opened page with the Spotify login
    //await p.waitForNavigation({waitUntil: 'networkidle2'})
    await p.focus("#login-username"); // put in the credentials
    await p.keyboard.type(mail);
    await p.focus("#login-password");
    await p.keyboard.type(pass);
    await p.$eval("#login-button", el => el.click());
    (await this.browser.pages())[0].close(); // close the old SDK page
    await sleep(1000); // wait to be redirected back to your SDK page
    this.page = (await this.browser.pages())[0];
    this.auth_code = await this.page.evaluate((varName) => window.localStorage.getItem(varName), "__auth"); // here I save the auth code as a property of the class instance as well
}
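For the CAPTCHA workaround mentioned in this step, the wiring is roughly the following (a sketch, assuming the puppeteer-extra and puppeteer-extra-plugin-stealth packages; the launch options stay the same as in initPup above):
// Sketch: swap plain puppeteer for puppeteer-extra with the stealth plugin.
const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');
puppeteer.use(StealthPlugin());
// puppeteer.launch() then accepts the same options as before,
// e.g. { headless: false, executablePath: "...chrome.exe", ... }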
Once you're redirected to the SDK page again, you already have cid and csec, and now also the auth code.
if(auth_code == undefined){
    //... (this is already in step 3)
}else{
    // GETTING CID and CLIENT SECRET AGAIN
    c_id = window.localStorage.getItem("__cid");
    c_sec = window.localStorage.getItem("__csec");
    // SAVING THE AUTH CODE
    auth_code = auth_code.value;
    window.localStorage.setItem("__auth", auth_code);
}
Generate a token on the backend.
async genToken(): Promise<void> {
    // Pretty much copied from: https://developer.spotify.com/documentation/web-playback-sdk/guide/
    var authOptions = {
        url: 'https://accounts.spotify.com/api/token',
        headers: {
            'Authorization': 'Basic ' + (Buffer.from(this.client_id + ':' + this.client_secret).toString("base64"))
        },
        form: {
            code: this.auth_code,
            redirect_uri: "http://localhost:8080/#/pup/player/",
            grant_type: 'authorization_code'
        },
        json: true
    };
    var token;
    var refresh_token;
    request.post(authOptions, function(error, response, body) { // also get the refresh token
        if (!error && response.statusCode === 200) {
            token = body.access_token;
            refresh_token = body.refresh_token;
        }
    });
    while (!token || !refresh_token){ // wait for both of them
        await sleep(100);
    }
    this.token = token; // save them in the class instance properties
    this.refresh_token = refresh_token;
}
Lastly, puppeteer fills an HTML field on the SDK page with the token generated in step 6 and presses a button to start the SDK player (a sketch of the puppeteer side follows the snippet below).
// this function gets called after the button gets pressed
async function main(){
    console.log(window.localStorage.getItem("__cid")); // print out all the data
    console.log(window.localStorage.getItem("__csec"));
    console.log(window.localStorage.getItem("__auth"));
    console.log(getToken());
    const player = new Spotify.Player({ // start the Spotify player
        name: 'Home Spotify Player',
        getOAuthToken: cb => cb(getToken())
    });
    player.connect().then(()=>{ // connect the player
        console.log(player);
    });
    window.player = player;
}

function getToken(){
    return document.getElementById("token_input").value;
}
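The backend puppeteer part of that last step could look roughly like this (a sketch; the #token_input and #start_button selectors are hypothetical and have to match whatever your SDK page uses):
// Sketch: backend puppeteer types the generated token into the SDK page and starts the player.
async startPlayer(){
    await this.page.focus("#token_input");                      // hypothetical token field on the SDK page
    await this.page.keyboard.type(this.token);                  // token generated by genToken() in step 6
    await this.page.$eval("#start_button", el => el.click());   // hypothetical button that calls main()
}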
You are done. The next step, for me at least, was having another UI page talk to the backend puppeteer to control the SDK page (play/pause/skip etc.). This process is pretty "hacky" and not pretty at all, but for a little personal project it should do the job fine.
If anyone is interested in the whole code I might even upload it somewhere, but I think this read is long enough and overly detailed anyway.

The proper way for this would be to use the device authorization grant flow - Spotify does this already for its TV applications, but they seem to block other applications from using it. It is possible to find clientIds online that are working with this, but it is not supported by Spotify.
I explained how this works and requested that they enable it in a supported way for custom applications in this feature request - please upvote the idea there if you find it useful.
That said, it is also possible to implement your own device authorization grant flow by hosting an extra server between your device and Spotify. That server should
host an authorize and a token API endpoint
host a user-facing page where the user can enter the user code
host a callback page where Spotify redirects the user after login
I believe this is how https://github.com/antscode/MacAuth implements it:
When the device calls the authorize endpoint, the server should generate a record containing the device_code and user_code and send them back in the response. The server should keep the record for later.
When the user enters the user_code in the user-facing page, the server should redirect the user to Spotify to login, and after login the user should be redirected to the server's callback page. At that moment the server can fetch credentials from Spotify's token endpoint using the data it received in the callback. The server should store the credentials it received in the record of the user_code.
The device can then poll the server's token endpoint with its device_code to check whether the tokens are available.
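To make that concrete, the device side of such a flow might look roughly like this (a sketch against a self-hosted server; the /authorize and /token paths and field names are assumptions, not a real Spotify API):
// Sketch: the device asks a self-hosted server for a code pair, then polls for the tokens.
const fetch = require('node-fetch');

async function deviceLogin(serverBase) {
    // 1. Request a device_code / user_code pair (hypothetical endpoint on your server).
    const auth = await (await fetch(serverBase + '/authorize', { method: 'POST' })).json();
    console.log('Enter this code on the server page:', auth.user_code);

    // 2. Poll the token endpoint until the user has logged in via the user-facing page.
    while (true) {
        const res = await fetch(serverBase + '/token?device_code=' + auth.device_code);
        if (res.status === 200) return res.json();    // contains the access/refresh tokens
        await new Promise(r => setTimeout(r, 5000));  // wait and poll again
    }
}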

Related

How do I call Google Analytics Admin API (for GA4) using an OAuth2 client in node.js?

I've noticed that all the node.js code samples for Google Analytics Admin and Google Analytics Data assume a service account and either a JSON file or a GOOGLE_APPLICATION_CREDENTIALS environment variable.
e.g.
const analyticsAdmin = require('@google-analytics/admin');

async function main() {
    // Instantiates a client using default credentials.
    // TODO(developer): uncomment and use the following line in order to
    // manually set the path to the service account JSON file instead of
    // using the value from the GOOGLE_APPLICATION_CREDENTIALS environment
    // variable.
    // const analyticsAdminClient = new analyticsAdmin.AnalyticsAdminServiceClient(
    //     {keyFilename: "your_key_json_file_path"});
    const analyticsAdminClient = new analyticsAdmin.AnalyticsAdminServiceClient();
    const [accounts] = await analyticsAdminClient.listAccounts();
    console.log('Accounts:');
    accounts.forEach(account => {
        console.log(account);
    });
}
I am building a service which allows users to use their own account to access their own data, so using a service account is not appropriate.
I initially thought I might be able to use google-api-nodejs-client -- auth would be handled by building a URL to redirect to and doing the oauth dance...
Using google-api-nodejs-client:
const {google} = require('googleapis');

const oauth2Client = new google.auth.OAuth2(
    YOUR_CLIENT_ID,
    YOUR_CLIENT_SECRET,
    YOUR_REDIRECT_URL
);

// generate a url that asks permissions for Google Analytics scopes
const scopes = [
    "https://www.googleapis.com/auth/analytics",          // View and manage your Google Analytics data
    "https://www.googleapis.com/auth/analytics.readonly", // View your Google Analytics data
];

const url = oauth2Client.generateAuthUrl({
    access_type: 'offline',
    scope: scopes
});
// redirect to `url` in a popup for the oauth dance
After auth, Google redirects to GET /oauthcallback?code={authorizationCode}, so we collect the code and get the token to perform subsequent OAuth2 enabled calls:
// This will provide an object with the access_token and refresh_token.
// Save these somewhere safe so they can be used at a later time.
const {tokens} = await oauth2Client.getToken(code)
oauth2Client.setCredentials(tokens);
// of course we need to handle the refresh token too
This all works fine, but is it possible to plug the OAuth2 client from the google-api-node-client code into the google-analytics-admin code?
👉 It looks like I need to somehow call analyticsAdmin.AnalyticsAdminServiceClient() with the access token I've already retrieved - but how?
The simple answer here is don't bother with the Node.js libraries for Google Analytics Admin & Google Analytics Data.
Cut out the middleman and build a very simple wrapper yourself which queries the REST APIs directly. Then you will have visibility on the whole of the process, and any errors made will be your own.
Provided you handle the refresh token correctly, this is likely all you need:
const getResponse = async (url, accessToken, options = {}) => {
    const response = await fetch(url, {
        ...options,
        headers: {
            Authorization: `Bearer ${accessToken}`,
        },
    });
    return response;
};
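The refresh handling mentioned above can stay just as small; one way to do it is a sketch like this (a standard OAuth2 refresh against Google's token endpoint; where you store the tokens is up to you):
// Sketch: exchange the stored refresh token for a new access token when a call starts returning 401.
const refreshAccessToken = async (clientId, clientSecret, refreshToken) => {
    const response = await fetch('https://oauth2.googleapis.com/token', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
            client_id: clientId,
            client_secret: clientSecret,
            refresh_token: refreshToken,
            grant_type: 'refresh_token',
        }),
    });
    const body = await response.json();
    return body.access_token;
};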
I use Python but the method could be similar. You should create a Credentials object based on the obtained token:
credentials = google.auth.credentials.Credentials(token=YOUR_TOKEN)
Then use it to create the client:
from google.analytics.admin import AnalyticsAdminServiceClient
client = AnalyticsAdminServiceClient(credentials=credentials)
client.list_account_summaries()

Endpoint to fetch Subreddits of a Reddit Account

I have completed the oauth flow for my third party app against a Reddit account and I've gotten the access token for the account.
Now my next issue is: how can I fetch the subreddits for an account using the access token? I can't seem to figure out the endpoint for that.
Does anyone know the endpoint for that?
Thank you
The Reddit OAuth Docs say that for the /subreddits/mine/(where) endpoint, the subreddits OAuth scope is necessary.
Once that scope is acquired for a user, you can use the following snippets of code to access the list of subscribed subreddits for the user:
View a user's subreddits:
// Demonstrate using snooclient and Fusebit
const subscriptions = await redditClient.getSubscriptions().fetchAll();

// OR fetch the first page using a raw HTTP request
// - the User-Agent is necessary, don't forget it!
const access_token = redditClient.fusebit.credentials.access_token;
const httpSubs = await superagent
    .get('https://oauth.reddit.com/subreddits/mine/subscriber')
    .set('Authorization', `Bearer ${access_token}`)
    .set('User-Agent', 'Fusebit Integration');

const length = httpSubs.body.data.children.length;
ctx.body = {
    usingSnoo: `User has ${subscriptions.length} subreddits`,
    usingHttp: `The first page has ${length} subreddits`,
};

Access Facebook Messenger User Profile API in DialogFlow

I'm building a cross-platform chatbot in Google's DialogFlow. I'd like to access the Facebook User Profile API to learn the user's first name.
I'm struggling to find advice on how (or if) I can make this happen.
https://developers.facebook.com/docs/messenger-platform/identity/user-profile/
Has anybody here achieved this?
I did that for one of my bots yesterday. You need two things: first the Page Token, and second the PSID, which is the Page-scoped user ID.
In Dialogflow, you will receive the request block with the PSID as the sender ID. You can find it at:
agent.originalRequest.payload.data.sender.id
This PSID needs to be passed in a GET request to the Graph API at /$psid?fields=first_name, with your Page Token as the access token, to get the first name in the response.
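As a rough illustration of that call (a sketch using node-fetch; the PSID and Page Token values are placeholders):
// Sketch: look up the user's first name from the Messenger User Profile API.
const fetch = require('node-fetch');

async function getFirstName(psid, pageToken) {
    const url = `https://graph.facebook.com/${psid}?fields=first_name&access_token=${pageToken}`;
    const res = await fetch(url);
    const profile = await res.json();
    return profile.first_name;
}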
You need to make a call to the Facebook Graph API in order to get the user's profile.
Facebook offers some SDKs for this, but their official JavaScript SDK is intended more for a web client than for a server. They mention some 3rd party Node.js libraries on that link. I'm particularly using fbgraph (at the time of writing, it's the only one that seems to be "kind of" maintained).
So, you need a Page Token to make the calls. While developing, you can get one from here:
https://developers.facebook.com/apps/<your app id>/messenger/settings/
Here's some example code:
const { promisify } = require('util');
let graph = require('fbgraph'); // facebook graph library
const fbGraph = {
    get: promisify(graph.get)
};

graph.setAccessToken(FACEBOOK_PAGE_TOKEN); // <--- your facebook page token
graph.setVersion("3.2");

// gets profile from facebook
// user must have initiated contact for sender id to be available
// returns: facebook profile object, if any
async function getFacebookProfile(agent) {
    let ctx = agent.context.get('generic');
    let fbSenderID = ctx ? ctx.parameters.facebook_sender_id : undefined;
    let payload;
    console.log('FACEBOOK SENDER ID: ' + fbSenderID);
    if (fbSenderID) {
        try { payload = await fbGraph.get(fbSenderID); }
        catch (err) { console.warn(err); }
    }
    return payload;
}
Notice you don't always have access to the sender id, and in case you do, you don't always have access to the profile. For some fields like email, you need to request special permissions. Regular fields like name and profile picture are usually available if the user is the one who initiates the conversation. More info here.
Hope it helps.
Edit
Promise instead of async:
function getFacebookProfile(agent) {
    return new Promise((resolve, reject) => {
        let ctx = agent.context.get('generic');
        let fbSenderID = ctx ? ctx.parameters.facebook_sender_id : undefined;
        console.log('FACEBOOK SENDER ID: ' + fbSenderID);
        fbGraph.get(fbSenderID)
            .then(payload => {
                console.log('all fine: ' + payload);
                resolve(payload);
            })
            .catch(err => {
                console.warn(err);
                reject(err);
            });
    });
}

Spotify node web api - trouble with multiple users

I am working on an app that uses the Spotify Node web API and I am having trouble when multiple users log in to my application. I am successfully able to go through the authentication flow and get the tokens and user ID after a user logs in. I am using the Authorization Code flow to authorize users (since I would like to get refresh tokens after expiration). However, the current problem is that the getUserPlaylists function described here (FYI, if the first argument is undefined, it returns the playlists of the authenticated user) returns the playlists of the most recently authenticated user instead of the user currently using the app.
Example 1: if user A logs in to the application, they see their playlists fine. If user B logs in to the application, they also see their own playlists. BUT, if user A refreshes the page, user A sees the playlists of user B (instead of user A's own playlists).
Example 2: user A logs in, and user B can see user A's playlists just by going to the app/myplaylists route.
My guess is, the problem is with this section of the code
spotifyApi.setAccessToken(access_token);
spotifyApi.setRefreshToken(refresh_token);
The latest user's tokens override whichever user was there before, and hence the previous user loses the grants to do actions such as viewing their own playlists.
Expected behavior: user A sees their own playlists after user B logs in, even after refreshing the page.
Actual behavior: user A sees user B's playlists after user B has logged in and user A refreshes the page.
I am aware that I could skip the Spotify Node API and just use the tokens to make requests directly, and that would probably be fine; however, it would be great to still be able to use the Node API and handle multiple users.
Here is the portion of code that most likely has problems:
export const createAuthorizeURL = (
    scopes = SCOPE_LIST,
    state = 'spotify-auth'
) => {
    const authUrl = spotifyApi.createAuthorizeURL(scopes, state);
    return {
        authUrl,
        ...arguments
    };
};

export async function authorizationCodeGrant(code) {
    let params = {
        clientAppURL: `${APP_CLIENT_URL || DEV_HOST}/app`
    };
    try {
        const payload = await spotifyApi.authorizationCodeGrant(code);
        const { body: { expires_in, access_token, refresh_token } } = payload;
        spotifyApi.setAccessToken(access_token);
        spotifyApi.setRefreshToken(refresh_token);
        params['accessToken'] = access_token;
        params['refreshToken'] = refresh_token;
        return params;
    } catch (error) {
        return error;
    }
    return params;
}

export async function getMyPlaylists(options = {}) {
    try {
        // if undefined, should return currently authenticated user
        return await spotifyApi.getUserPlaylists(undefined, options);
    } catch (error) {
        return error;
    }
}
Would appreciate any help on this. I am really excited about what I am making so it would mean a LOT if someone could help me find the issue...
You're on the right track. When you set your access token and refresh token, though, you're setting it for your entire application, and all users who call your server will use it. Not ideal.
Here's a working example of the Authorization Code Flow in Node: https://glitch.com/edit/#!/spotify-authorization-code
As you can see, it uses a general instance of SpotifyWebApi to handle authentication, but it instantiates a new loggedInSpotifyApi for every request to user data, so you get the data for the user who's asking for it.
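The core idea is to create a short-lived SpotifyWebApi instance per request instead of mutating one shared instance; a sketch (assuming each user's tokens are stored per session or in a database, and that CLIENT_ID/CLIENT_SECRET are your app credentials):
// Sketch: one SpotifyWebApi instance per request, seeded with that user's tokens.
import SpotifyWebApi from 'spotify-web-api-node';

export function spotifyApiFor(user) {
    const api = new SpotifyWebApi({ clientId: CLIENT_ID, clientSecret: CLIENT_SECRET });
    api.setAccessToken(user.accessToken);   // tokens stored per user, not on a global instance
    api.setRefreshToken(user.refreshToken);
    return api;
}

// usage inside a request handler:
// const playlists = await spotifyApiFor(req.user).getUserPlaylists(undefined, options);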
If you want to use the above example, you can just start editing to "remix" and create your own copy of the project.
Happy hacking!

GoogleActions Account not linked yet error

I'm trying to implement OAuth2 authentication for my Node.js Google Assistant app, developed using Dialogflow (API.ai) and Actions on Google.
So I followed this answer. But I'm always getting the "It looks like your test oauth account is not linked yet." error. When I tried to open the URL shown in the debug tab, it shows a 500 broken URL error.
Dialogflow fulfillment
index.js
'use strict';
const functions = require('firebase-functions'); // Cloud Functions for Firebase library
const DialogflowApp = require('actions-on-google').DialogflowApp; // Google Assistant helper library
const googleAssistantRequest = 'google'; // Constant to identify Google Assistant requests
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
console.log('Request headers: ' + JSON.stringify(request.headers));
console.log('Request body: ' + JSON.stringify(request.body));
// An action is a string used to identify what needs to be done in fulfillment
let action = request.body.result.action; // https://dialogflow.com/docs/actions-and-parameters
// Parameters are any entites that Dialogflow has extracted from the request.
const parameters = request.body.result.parameters; // https://dialogflow.com/docs/actions-and-parameters
// Contexts are objects used to track and store conversation state
const inputContexts = request.body.result.contexts; // https://dialogflow.com/docs/contexts
// Get the request source (Google Assistant, Slack, API, etc) and initialize DialogflowApp
const requestSource = (request.body.originalRequest) ? request.body.originalRequest.source : undefined;
const app = new DialogflowApp({request: request, response: response});
// Create handlers for Dialogflow actions as well as a 'default' handler
const actionHandlers = {
// The default welcome intent has been matched, welcome the user (https://dialogflow.com/docs/events#default_welcome_intent)
'input.welcome': () => {
// Use the Actions on Google lib to respond to Google requests; for other requests use JSON
//+app.getUser().authToken
if (requestSource === googleAssistantRequest) {
sendGoogleResponse('Hello, Welcome to my Dialogflow agent!'); // Send simple response to user
} else {
sendResponse('Hello, Welcome to my Dialogflow agent!'); // Send simple response to user
}
},
// The default fallback intent has been matched, try to recover (https://dialogflow.com/docs/intents#fallback_intents)
'input.unknown': () => {
// Use the Actions on Google lib to respond to Google requests; for other requests use JSON
if (requestSource === googleAssistantRequest) {
sendGoogleResponse('I\'m having trouble, can you try that again?'); // Send simple response to user
} else {
sendResponse('I\'m having trouble, can you try that again?'); // Send simple response to user
}
},
// Default handler for unknown or undefined actions
'default': () => {
// Use the Actions on Google lib to respond to Google requests; for other requests use JSON
if (requestSource === googleAssistantRequest) {
let responseToUser = {
//googleRichResponse: googleRichResponse, // Optional, uncomment to enable
//googleOutputContexts: ['weather', 2, { ['city']: 'rome' }], // Optional, uncomment to enable
speech: 'This message is from Dialogflow\'s Cloud Functions for Firebase editor!', // spoken response
displayText: 'This is from Dialogflow\'s Cloud Functions for Firebase editor! :-)' // displayed response
};
sendGoogleResponse(responseToUser);
} else {
let responseToUser = {
//richResponses: richResponses, // Optional, uncomment to enable
//outputContexts: [{'name': 'weather', 'lifespan': 2, 'parameters': {'city': 'Rome'}}], // Optional, uncomment to enable
speech: 'This message is from Dialogflow\'s Cloud Functions for Firebase editor!', // spoken response
displayText: 'This is from Dialogflow\'s Cloud Functions for Firebase editor! :-)' // displayed response
};
sendResponse(responseToUser);
}
}
};
// If undefined or unknown action use the default handler
if (!actionHandlers[action]) {
action = 'default';
}
// Run the proper handler function to handle the request from Dialogflow
actionHandlers[action]();
// Function to send correctly formatted Google Assistant responses to Dialogflow which are then sent to the user
function sendGoogleResponse (responseToUser) {
if (typeof responseToUser === 'string') {
app.ask(responseToUser); // Google Assistant response
} else {
// If speech or displayText is defined use it to respond
let googleResponse = app.buildRichResponse().addSimpleResponse({
speech: responseToUser.speech || responseToUser.displayText,
displayText: responseToUser.displayText || responseToUser.speech
});
// Optional: Overwrite previous response with rich response
if (responseToUser.googleRichResponse) {
googleResponse = responseToUser.googleRichResponse;
}
// Optional: add contexts (https://dialogflow.com/docs/contexts)
if (responseToUser.googleOutputContexts) {
app.setContext(...responseToUser.googleOutputContexts);
}
app.ask(googleResponse); // Send response to Dialogflow and Google Assistant
}
}
// Function to send correctly formatted responses to Dialogflow which are then sent to the user
function sendResponse (responseToUser) {
// if the response is a string send it as a response to the user
if (typeof responseToUser === 'string') {
let responseJson = {};
responseJson.speech = responseToUser; // spoken response
responseJson.displayText = responseToUser; // displayed response
response.json(responseJson); // Send response to Dialogflow
} else {
// If the response to the user includes rich responses or contexts send them to Dialogflow
let responseJson = {};
// If speech or displayText is defined, use it to respond (if one isn't defined use the other's value)
responseJson.speech = responseToUser.speech || responseToUser.displayText;
responseJson.displayText = responseToUser.displayText || responseToUser.speech;
// Optional: add rich messages for integrations (https://dialogflow.com/docs/rich-messages)
responseJson.data = responseToUser.richResponses;
// Optional: add contexts (https://dialogflow.com/docs/contexts)
responseJson.contextOut = responseToUser.outputContexts;
response.json(responseJson); // Send response to Dialogflow
}
}
});
// Construct rich response for Google Assistant
const app = new DialogflowApp();
const googleRichResponse = app.buildRichResponse()
.addSimpleResponse('This is the first simple response for Google Assistant')
.addSuggestions(
['Suggestion Chip', 'Another Suggestion Chip'])
// Create a basic card and add it to the rich response
.addBasicCard(app.buildBasicCard(`This is a basic card. Text in a
basic card can include "quotes" and most other unicode characters
including emoji 📱. Basic cards also support some markdown
formatting like *emphasis* or _italics_, **strong** or __bold__,
and ***bold itallic*** or ___strong emphasis___ as well as other things
like line \nbreaks`) // Note the two spaces before '\n' required for a
// line break to be rendered in the card
.setSubtitle('This is a subtitle')
.setTitle('Title: this is a title')
.addButton('This is a button', 'https://assistant.google.com/')
.setImage('https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
'Image alternate text'))
.addSimpleResponse({ speech: 'This is another simple response',
displayText: 'This is the another simple response 💁' });
// Rich responses for both Slack and Facebook
const richResponses = {
'slack': {
'text': 'This is a text response for Slack.',
'attachments': [
{
'title': 'Title: this is a title',
'title_link': 'https://assistant.google.com/',
'text': 'This is an attachment. Text in attachments can include \'quotes\' and most other unicode characters including emoji 📱. Attachments also upport line\nbreaks.',
'image_url': 'https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
'fallback': 'This is a fallback.'
}
]
},
'facebook': {
'attachment': {
'type': 'template',
'payload': {
'template_type': 'generic',
'elements': [
{
'title': 'Title: this is a title',
'image_url': 'https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
'subtitle': 'This is a subtitle',
'default_action': {
'type': 'web_url',
'url': 'https://assistant.google.com/'
},
'buttons': [
{
'type': 'web_url',
'url': 'https://assistant.google.com/',
'title': 'This is a button'
}
]
}
]
}
}
}
};
Actually, I deployed the code that exists in the Dialogflow inline editor. But I don't know how to implement an OAuth endpoint, whether it should be a separate cloud function or whether it has to be included within the existing one. And also I am confused about how the OAuth authorization code flow will actually work: let's assume we are in the Assistant app; once the user says "talk to foo app", does it automatically open a web browser for the OAuth code exchange process?
The answer you referenced had an update posted on October 25th indicating they had taken action to prevent you from entering in a google.com endpoint as your auth provider for Account Linking. It seems possible that they may have taken other actions to prevent using Google's auth servers in this way.
If you're using your own auth server, the error 500 would indicate an error on your oauth server, and you should check your oauth server for errors.
Update to answer some of your other questions.
But don't know how to implement an oauth endpoint
Google provides guidance (but not code) on what you need to do for a minimal OAuth service, either using the Implicit Flow or the Authorization Code Flow, and how to test it.
whether it should be a separate cloud function or it has to be included within the existing one
It should be separate - it is even arguable that it must be separate. In both the Implicit Flow and the Authorization Code Flow, you need to provide a URL endpoint where users will be redirected to log into your service. For the Authorization Code Flow, you'll also need an additional webhook that the Assistant will use to exchange tokens.
The function behind these needs to be very very different than what you're doing for the Dialogflow webhook. While someone could probably make a single function that handles all of the different tasks - there is no need to. You'll be providing the OAuth URLs separately.
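For the Authorization Code Flow specifically, the extra token-exchange webhook could be sketched like this (Express; the storage, code validation and token minting are placeholders you would have to fill in):
// Sketch: the token endpoint the Assistant calls to exchange an auth code or refresh token.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/token', (req, res) => {
    const { grant_type, code, refresh_token } = req.body;
    if (grant_type === 'authorization_code') {
        // look up `code` issued by your authorization endpoint, then mint tokens for that user
        res.json({ token_type: 'Bearer', access_token: '...', refresh_token: '...', expires_in: 3600 });
    } else if (grant_type === 'refresh_token') {
        // validate `refresh_token` and mint a new access token
        res.json({ token_type: 'Bearer', access_token: '...', expires_in: 3600 });
    } else {
        res.status(400).json({ error: 'invalid_grant' });
    }
});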
However, your Dialogflow webhook does have some relationship with your OAuth server. In particular, the tokens that the OAuth server hands to the Assistant will be handed back to the Dialogflow webhook, so Dialogflow needs some way to get the user's information based on that token. There are many ways to do this, but to list just a few:
The token could be a JWT and contain the user information as claims in the body. The Dialogflow webhook should use the public key to verify the token is valid and needs to know the format of the claims.
The OAuth server and the Dialogflow webhook could use a shared account database, and the OAuth server store the token as a key to the user account and delete expired keys. The Dialogflow webhook could then use the token it gets as a key to look up the user.
The OAuth server might have a(nother) webhook where Dialogflow could request user information, passing the key as an Authorization header and getting a reply. (This is what Google does, for example.)
The exact solution depends on your needs and what resources you have available to you.
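For the first (JWT) option, the webhook-side check might look roughly like this (a sketch using the jsonwebtoken package; the claim names depend entirely on your OAuth server):
// Sketch: treat the incoming access token as a JWT and read the user from its claims.
const jwt = require('jsonwebtoken');

function userFromToken(accessToken, publicKey) {
    // verify() throws if the signature or the expiry is invalid
    const claims = jwt.verify(accessToken, publicKey, { algorithms: ['RS256'] });
    return { id: claims.sub, email: claims.email }; // assumed claim names
}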
And also I am so confused with how oauth authorization code flow will actually work.. Let's assume we are on the Assistant app, once the user say "talk to foo app", does it automatically opens a web browser for oauth code exchange process?
Broadly speaking - yes. The details vary (and can change), but don't get too fixated on the details.
If you're using the Assistant on a speaker, you'll be prompted to open the Home app which should be showing a card saying what Action wants permission. Clicking on the card will open a browser or webview to the Actions website to begin the flow.
If you're using the Assistant on a mobile device, it prompts you directly and then opens a browser or webview to the Actions website to begin the flow.
The auth flow basically involves:
Having the user authenticate themselves, if necessary.
Having the user authorize the Assistant to access your resources on the user's behalf.
It then redirects to Google's servers with a one-time code.
Google's servers then take the code... and close the window. That's the extent of what the user sees.
Behind the scenes, Google takes this code and, since you're using the Authorization Code Flow, exchanges it for an auth token and a refresh token at the token exchange URL.
Then, whenever the user uses your Action, it will send an auth token along with the rest of the request to your server.
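In the Dialogflow v1 webhook format used in the fulfillment above, that token shows up in the request body; roughly (a sketch, with the same field path as the Python snippet further down):
// Sketch: read the linked account's access token inside the Dialogflow fulfillment.
const user = request.body.originalRequest
    && request.body.originalRequest.data
    && request.body.originalRequest.data.user;
const accessToken = user ? user.accessToken : undefined;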
Plz suggest the necessary package for OAuth2 configuration
That I can't do. For starters - it completely depends on your other resources and requirements. (And this is why StackOverflow doesn't like people asking for suggestions like this.)
There are packages out there (you can search for them) that let you setup an OAuth2 server. I'm sure someone out there provides OAuth-as-a-service, although I don't know any offhand. Finally, as noted above, you can write a minimal OAuth2 server using the guidance from Google.
Trying to create a proxy for Google's OAuth is... probably possible... not as easy as it first seems... likely not as secure as anyone would be happy with... and possibly (but not necessarily, IANAL) a violation of Google's Terms of Service.
can't we store the user's email address by this approach?
Well, you can store whatever you want in the user's account. But this is the user's account for your Action.
You can, for example, access Google APIs on behalf of your user to get their email address or whatever else they have authorized you to do with Google. The user account that you have will likely store the OAuth tokens that you use to access Google's server. But you should logically think of that as separate from the code that the Assistant uses to access your server.
My implementation of a minimal OAuth2 server (it works for the implicit flow but doesn't store the user session),
taken from https://developers.google.com/identity/protocols/OAuth2UserAgent:
function oauth2SignIn() {
    // Google's OAuth 2.0 endpoint for requesting an access token
    var oauth2Endpoint = 'https://accounts.google.com/o/oauth2/v2/auth';

    // Create element to open OAuth 2.0 endpoint in new window.
    var form = document.createElement('form');
    form.setAttribute('method', 'GET'); // Send as a GET request.
    form.setAttribute('action', oauth2Endpoint);

    // Get the state and redirect_uri parameters from the request
    var searchParams = new URLSearchParams(window.location.search);
    var state = searchParams.get("state");
    var redirect_uri = searchParams.get("redirect_uri");
    //var client_id = searchParams.get("client_id");

    // Parameters to pass to OAuth 2.0 endpoint.
    var params = {
        'client_id': YOUR_CLIENT_ID,
        'redirect_uri': redirect_uri,
        'scope': 'email',
        'state': state,
        'response_type': 'token',
        'include_granted_scopes': 'true'
    };

    // Add form parameters as hidden input values.
    for (var p in params) {
        var input = document.createElement('input');
        input.setAttribute('type', 'hidden');
        input.setAttribute('name', p);
        input.setAttribute('value', params[p]);
        form.appendChild(input);
    }

    // Add form to page and submit it to open the OAuth 2.0 endpoint.
    document.body.appendChild(form);
    form.submit();
}
This implementation isn't very secure but it's the only code I've gotten to work as OAuth server for the Assistant.
I was able to make it work after a long time. We have to enable the webhook first; we can see how to enable the webhook in the Dialogflow fulfillment docs. If we are going to use Google Assistant, then we have to enable the Google Assistant integration in the integrations section first. Then follow the steps mentioned below for Account Linking in Actions on Google:
Go to Google Cloud Console -> APIs and Services -> Credentials -> OAuth 2.0 client IDs -> Web client -> note the client ID and client secret from there -> Download JSON - from the JSON, note down the project id, auth_uri, token_uri -> Authorised redirect URIs -> whitelist our app's URL -> in this URL the fixed part is https://oauth-redirect.googleusercontent.com/r/ and the project id is appended to it -> save the changes.
Actions on Google -> Account linking setup:
1. Grant type = Authorisation code
2. Client info: fill in the client id, client secret, auth_uri and token_uri
3. Enter the auth_uri as https://www.googleapis.com/auth and the token_uri as https://www.googleapis.com/token
4. Save and run
5. It will show an error while running on the Google Assistant, but don't worry
6. Come back to the account linking section in the Assistant settings and enter the auth_uri as https://accounts.google.com/o/oauth2/auth and the token_uri as https://accounts.google.com/o/oauth2/token
7. Put the scopes as https://www.googleapis.com/auth/userinfo.profile and https://www.googleapis.com/auth/userinfo.email and we are good to go
8. Save the changes.
In the hosting server (Heroku) logs, we can see the access token value, and through the access token we can get the details regarding the email address.
Append the access token to this link "https://www.googleapis.com/oauth2/v1/userinfo?access_token=" and we can get the required details in the resulting JSON page.
access_token = req.get("originalRequest").get("data").get("user").get("accessToken")
link = "https://www.googleapis.com/oauth2/v1/userinfo?access_token=" + access_token
r = requests.get(link)
print("Email Id = " + r.json()["email"])
print("Name = " + r.json()["name"])
