Implementing isRequestFromAssistant in Node.js on actions-on-google project fulfillment - node.js

I am having trouble implementing the isRequestFromAssistant method to verify requests to my fulfillment webhook. Using Node.js, I instantiate the following variables at the start of my index.js file:
const App = require('actions-on-google').ApiAiApp;
const app = new App({ request, response });
I then use app with .ask(), .tell(), and other methods throughout my functions.
The code I see in the docs for implementing isRequestFromAssistant is:
const app = new ActionsSdkApp({request, response});

app.isRequestFromAssistant('my-project-id')
  .then(() => {
    app.ask('Hey there, thanks for stopping by!');
  })
  .catch(err => {
    response.status(400).send();
  });
If I leave out the first line and use my existing app variable, created with ApiAiApp instead of ActionsSdkApp, it doesn't work. If I create new variables App1 and app1 using ActionsSdkApp and change the above code to app1.isRequestFromAssistant, it also doesn't work. I have tried other variations with no luck.
When I say it doesn't work, I mean I receive a 500 Internal Server Error when I call it. I am currently hosting it through ngrok. I am still a beginner with Node.js, although I have managed to get the other 700 lines of code working just fine, learning mostly from Google searches and reading these forums.

You have a few things going on here which, individually or in combination, may be causing the problem.
First - make sure you have the most recent version of the actions-on-google library. The isRequestFromAssistant() function was added in version 1.6.0, I believe.
Second - Make sure you're creating the right kind of App instance. If you're using Dialogflow (formerly API.AI), you should be creating it with something like
const App = require('actions-on-google').DialogflowApp;
const app = new App( {request, response} );
or
const { DialogflowApp } = require('actions-on-google');
const app = new DialogflowApp( {request, response} );
(They both do the same thing, but you'll see both forms in documentation.) You should switch to DialogflowApp from ApiAiApp (which your example uses) to reflect the new name, but the old form has been retained.
If you're using the Actions SDK directly (not using Dialogflow / API.AI), then you should be using the ActionsSdkApp object, something like
const { ActionsSdkApp } = require('actions-on-google');
const app = new ActionsSdkApp({request: request, response: response});
(Again, you'll see variants on this, but they're all fundamentally the same.)
Third - Make sure you're using the function that matches the object you're using. The isRequestFromAssistant() function is only available if you are using the Actions SDK.
If you are using Dialogflow, the corresponding function is isRequestFromDialogflow(). The parameters are different, however, since it requires you to set verification information (a secret header name and value) as part of your Dialogflow fulfillment configuration.
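For example, a minimal sketch of what that check could look like (the header name and value here are placeholders for whatever you configure on the Dialogflow Fulfillment page):
// assumes `app` is a DialogflowApp instance created as shown above
if (app.isRequestFromDialogflow('my-secret-header', 'my-secret-value')) {
  app.ask('Hey there, thanks for stopping by!');
} else {
  response.status(400).send();
}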
Finally - If you're getting a 500 error, then check your logs (or the output from stderr) for the Node.js server that is running. Typically there will be an error message there that points you in the right direction. If not, posting that error message as part of your Stack Overflow question is always helpful.

Set secure (randomly generated) auth header key and value in the Dialogflow Fulfillment page, then in Node.js:
if (app.isRequestFromDialogflow("replace_with_key", "replace_with_value")) {
  console.log("Request came from dialogflow!");
  // rest of bot
} else {
  console.log("Request did not come from dialogflow!");
  response.status(400).send();
}
Also see: https://developers.google.com/actions/reference/nodejs/DialogflowApp#isRequestFromDialogflow

Related

How does users.watch (in the Gmail Google API) listen for notifications?

I am confused about how the watch feature in the Gmail API should be implemented to receive push notifications inside a Node.js script. Should I call the method inside an infinite loop or something so that it doesn't stop listening for email notifications after the call is made?
Here's the sample code that I've written in node.js:
const {google} = require('googleapis'); // from the googleapis package

const getEmailNotification = () => {
  return new Promise(async (resolve, reject) => {
    try {
      let auth = await authenticate(); // my own helper that returns an authorized client
      const gmail = google.gmail({version: 'v1', auth});
      await gmail.users.stop({
        userId: '<email id>'
      });
      let watchResponse = await gmail.users.watch({
        userId: '<email id>',
        labelIds: ['INBOX'],
        topicName: 'projects/<projectName>/topics/<topicName>'
      });
      return resolve(watchResponse);
    } catch (err) {
      return reject(`Some error occurred`);
    }
  });
};
Thank you!
Summary
To receive push notifications through Pub/Sub you need to create a webhook to receive them. What does this mean? You need a web application, or any kind of service that exposes a URL, where notifications can be received.
As stated in the Push subscription documentation:
The Pub/Sub server sends each message as an HTTPS request to the subscriber application at a pre-configured endpoint.
The endpoint acknowledges the message by returning an HTTP success status code. A non-success response indicates that the message should be resent.
Setting up a channel to watch for notifications can be summarized in the following steps (the documentation you refer to lists them):
Select/Create a project within the Google Cloud Console.
Create a new PUB/SUB topic
Create a subscription (PUSH) for that topic.
Add the necessary permissions, in this case add gmail-api-push@system.gserviceaccount.com as Pub/Sub Publisher.
Indicate what types of mail you want it to listen for via Users.watch() method (which is what you are doing in your script).
Example
I'll give you an example using Apps Script (it is an easy way to visualize it, but this could be achieved from any kind of web application; as you are using Node.js, I suppose you are familiar with Express.js or related frameworks).
First I created a new Google Apps Script project; this will be my webhook. Basically I want it to log every HTTP POST request into a Google Doc that I have previously created. For that I use doPost(), the equivalent of app.post() in Express. If you want to know more about how Apps Script works you can read its documentation, but this is not the main topic.
Code.gs
const doPost = (e) => {
  const doc = DocumentApp.openById('<DOC_ID>')
  doc.getBody().appendParagraph(JSON.stringify(e, null, 2))
}
Later I make a new deployment as a Web App, set it to be accessible by anyone, and write down the URL for later. This will be similar to deploying your Node.js application to the internet.
I select a project in the Cloud Console, as indicated in the Prerequisites of Cloud Pub/Sub.
Inside this project, I create a new topic that I call GmailAPIPush. Then I click Add Principal (in the right bar of the Topics section) and add gmail-api-push@system.gserviceaccount.com with the Pub/Sub Publisher role. This is a requirement that grants Gmail the privilege to publish notifications.
In the same project, I create a Subscription. I tell it to be of the Push type and add the URL of the Web App that I have previously created.
This is the most critical part, since it determines how your application will receive notifications. If you want to know which type of subscription best suits your needs (PUSH or PULL), there is detailed documentation that will help you choose between the two.
Finally we are left with the simplest part, configuring the Gmail account to send updates on the mailbox. I am going to do this from Apps Script, but it is exactly the same as with Node.
const watchUserGmail = () => {
  const request = {
    'labelIds': ['INBOX'],
    'topicName': 'projects/my_project_name/topics/GmailAPIPush'
  }
  Gmail.Users.watch(request, 'me')
}
Once the function is executed, I send a test message, and voila, the notification appears in my document.
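Since your script is in Node.js, here is a rough sketch of what the same webhook could look like with Express (the route path and port are just placeholders). The important part is answering with a success status code so Pub/Sub does not resend the message:
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

// this URL is what you register as the push subscription endpoint
app.post('/gmail-notifications', (req, res) => {
  // Pub/Sub wraps the Gmail notification in message.data as base64-encoded JSON
  const notification = JSON.parse(
    Buffer.from(req.body.message.data, 'base64').toString()
  );
  console.log(notification); // contains emailAddress and historyId

  // acknowledge the message so Pub/Sub does not resend it
  res.sendStatus(204);
});

app.listen(3000);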
Returning to the case you describe, I am going to try to explain it with a metaphor. Imagine you have a mailbox and you are waiting for a very important letter. As you are nervous, you go every 5 minutes to check whether the letter has arrived (similar to the polling loop you propose); most of the time you check your mailbox, there is nothing new. However, you train your dog to bark (push notification) every time the mailman comes, so you only go to check your mailbox when you know you have new letters.

Send request progress to client side via nodejs and express

I am using this (contentful-export) library in my Express app like so:
const express = require('express');
const app = express();
...
app.get('/export', (req, res, next) => {
  const contentfulExport = require('contentful-export');
  const options = {
    ...
  };
  contentfulExport(options).then((result) => {
    res.send(result);
  });
});
Now this does work, but the method takes a while and logs status/progress messages to the Node console. I would like to keep the user updated as well. Is there a way I can send those console progress messages to the client?
This is my first time using Node/Express, so any help would be appreciated. I'm not sure if this already has an answer, since I'm not entirely sure what to call it.
Looking at the documentation for contentful-export, I don't think this is possible. The way this usually works in Node is that you have an object (contentfulExport in this case), you call a method on this object, and the same object is also an EventEmitter. This way you'd get a hook to react to fired events.
// pseudo code
someLibrary.on('someEvent', (event) => { /* do something */ })
someLibrary.doLongRunningTask()
.then(/* ... */)
This is not documented for contentful-export so I assume that there is no way to hook into the log messages that are sent to the console.
Your question has another tricky angle though. In the code you shared you include a single endpoint (/export). If you would like to display updates or show progress, you'd probably need a second endpoint giving information about the progress of your long-running task (which you cannot get from contentful-export, though).
The way this is usually handled is that you kick off the long-running task via one HTTP endpoint and then use another endpoint that serves progress info via polling or a WebSocket connection.
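For illustration, here is a rough sketch of that polling pattern in generic Express code. This is not something contentful-export supports; runLongTask stands in for a hypothetical task that can report its own progress:
const express = require('express');
const app = express();

// in-memory progress store; fine for a single-process demo
const jobs = {};

app.get('/export', (req, res) => {
  const jobId = Date.now().toString();
  jobs[jobId] = { progress: 0, result: null };

  // hypothetical long-running task that reports progress via a callback
  runLongTask(progress => { jobs[jobId].progress = progress; })
    .then(result => { jobs[jobId] = { progress: 100, result }; });

  // respond immediately with the job id so the client can start polling
  res.json({ jobId });
});

// the client polls this endpoint to display progress
app.get('/export/:jobId/status', (req, res) => {
  res.json(jobs[req.params.jobId] || { error: 'unknown job' });
});

app.listen(3000);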
Sorry that I can't give a proper solution but due to the limitation of contentful-export I don't think there is a clean/easy way to show progress of the exported data.
Hope that helps. :)

How to test http request on Dialogflow Fulfillment with the Inline Editor

I am testing Dialogflow Fulfillment with the Inline Editor.
What I am trying to do is an HTTP request using the 'request' library.
Here is the code I am using:
const requesthttp = require('request');
requesthttp('https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY', { json: true }, (err, res, body) => {
  if (err) { return console.log(err); }
  console.log(body.url);
  console.log(body.explanation);
});
But it returns a "not found" error.
I also noticed an alert on my Dialogflow with the following message:
"Billing account not configured. External network is not accessible and quotas are severely limited. Configure billing account to remove these restrictions."
So... Probably I can't test this piece of code without configuring a billing account.
My question is... Is there a url that I can use to test this code?
Or is the only way to test this code to configure a billing account and pay for it?
Thanks in advance
There are a number of approaches to testing your code.
If you want to continue to use Dialogflow's Inline Editor, you will need to set up Firebase to use a payment plan. However, the Blaze plan is "pay as you go" after a basic level of use. This level of use should be sufficient to cover most testing (and even very light production) uses of the service without imposing a charge. Once your Action has been approved, you're able to receive credits for the Google Cloud Platform, which can be applied to this use in case you go over the free level.
You can also use Firebase Cloud Functions, which the Inline Editor is based on, with your own local editor. One advantage of this is that you can serve the function locally, which has many of the same features as deploying it, but doesn't have the URL restriction (it is your own machine, after all). You can use a tool such as ngrok to create a secure tunnel to your machine during testing. Once you have tested, you can deploy this to Firebase with a paid plan.
You can, of course, choose to use any other hosting method you wish. Google and Dialogflow allow you to run your fulfillment webhook on any server, as long as that server can provide an HTTPS connection using a valid, non-self-signed, certificate. If you're using node.js, you can continue to use these libraries. If you wish to use another language, you will need to be able to parse and return JSON, but otherwise you have no restrictions.
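For reference, a minimal sketch of what such a Cloud Functions fulfillment could look like (the export name is just a placeholder); you could serve it locally with the Firebase CLI and tunnel to it with ngrok while testing:
const functions = require('firebase-functions');
const {dialogflow} = require('actions-on-google');

const app = dialogflow();

app.intent('Default Welcome Intent', conv => {
  conv.ask('Hello from a locally served function!');
});

// the dialogflow() app doubles as a request handler for Cloud Functions
exports.dialogflowFulfillment = functions.https.onRequest(app);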
There are a lot of ways to create your own server, for example a Node.js client with Express.js, which you can expose to the internet using ngrok as the webhook for fulfillment.
Develop a webhook. You can use different client libraries in Node.js (the Actions on Google client or the Dialogflow client) or in Python (Flask-Assistant or the Dialogflow client), or you can create your own just by handling the JSON request/response of Dialogflow and Actions on Google.
Once the webhook is ready, run it locally and expose it to the internet using ngrok.
Start with the following code for Actions on Google with Express.js:
'use strict';

const {dialogflow} = require('actions-on-google');
const express = require('express');
const bodyParser = require('body-parser');

const app = dialogflow();

app.intent('Default Welcome Intent', conv => {
  conv.ask('Hi, Welcome to Assistant by Express JS ');
});

express().use(bodyParser.json(), app).listen(8080);
Since Dialogflow's Inline Editor runs on Firebase Cloud Functions, you can use the built-in https module just as in any Node.js code. But requesting domains outside of the Google/Firebase universe will require the paid version of Firebase.
const https = require('https');

// `info` is assumed to be defined elsewhere with the target hostname and pathname
return new Promise((resolve, reject) => {
  const hostname = info.hostname;
  const pathname = info.pathname;
  let data = '';
  const request = https.get(`https://${hostname}${pathname}`, (res) => {
    res.on('data', (d) => {
      data += d;
    });
    // resolve with the accumulated response body once the stream ends
    res.on('end', () => resolve(data));
  });
  request.on('error', reject);
});
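To use that from an intent handler, you would typically wrap the promise in a helper and return it so the function waits for the request to finish. A rough sketch, assuming the actions-on-google library from the earlier answer (the helper and intent names are only placeholders):
const {dialogflow} = require('actions-on-google');
const https = require('https');

// hypothetical helper wrapping the promise-based https.get shown above
const fetchUrl = (hostname, pathname) => new Promise((resolve, reject) => {
  let data = '';
  https.get(`https://${hostname}${pathname}`, (res) => {
    res.on('data', (d) => { data += d; });
    res.on('end', () => resolve(data));
  }).on('error', reject);
});

const app = dialogflow();

app.intent('Nasa Picture', conv => {
  // returning the promise keeps the function alive until the request completes
  return fetchUrl('api.nasa.gov', '/planetary/apod?api_key=DEMO_KEY')
    .then(body => conv.ask(JSON.parse(body).explanation));
});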

NodeJS can't connect to XERO

I am using Xero as my accounting software. One requirement is that part of my application needs to be integrated with Xero to perform automation. Using the Node.js SDK seems easy, but the fact is I cannot connect to Xero even with the simplest example. Here is the code:
const xero = require('xero-node');

const config = {
  "userAgent": "Firefox",
  "consumerKey": "<MY_CONSUMER_KEY>",
  "consumerSecret": "<MY_CONSUMER_SECRET>",
  "privateKeyPath": "./privatekey.pem"
};

const xeroClient = new xero.PrivateApplication(config);

xeroClient.core.contacts.getContacts()
  .then(contacts => {
    console.log(contacts);
  }).catch(err => {
    console.log(err);
  });
The code does nothing and prints no error. Has anyone dealt with this problem?
The most likely reason is that your private key is invalid. If you put these lines (https://github.com/XeroAPI/xero-node/pull/169/files) into your module, it will check the key first.
You could also copy a few of those lines and validate your private key yourself.
At the moment the SDK swallows the exception when the key is invalid.
Also, please make sure you are running server side, not browser side.
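If you just want a quick sanity check of your own, something along these lines should catch an obviously bad key file (this mirrors the idea of the linked lines, not their exact code):
const fs = require('fs');

// quick sanity check: the file must exist and look like a PEM private key
const key = fs.readFileSync('./privatekey.pem', 'utf8');
if (!key.includes('PRIVATE KEY-----')) {
  throw new Error('privatekey.pem does not look like a valid PEM private key');
}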
Solved. I needed to add the following code:
const fs = require('fs'); // required for readFileSync

if (config.privateKeyPath && !config.privateKey)
  config.privateKey = fs.readFileSync(config.privateKeyPath);

How to run Alexa skill with the alexa-sdk on own server with Node.js without Lambda drop-in?

The Alexa skill docs will eventually allow you to send webhooks to https endpoints. However the SDK only documents lambda style alexa-sdk usage. How would one go about running Alexa applications on one's own server without anything abstracting Lambda? Is it possible to wrap the event and context objects?
You can already use your own endpoint. When you create a new skill, in the configuration tab, just choose HTTPS and provide your HTTPS endpoint. ASK will call your endpoint, where you can run anything you want (tip: check ngrok.com to tunnel to your own dev machine). Regarding the event and context objects: your endpoint will receive the event object information. You don't need the context object for anything; it just lets you interact with Lambda-specific features (http://docs.aws.amazon.com/lambda/latest/dg/python-context-object.html). Just make sure that you comply with the (undocumented) timeouts imposed by ASK and you are good to go.
Here's a way to do this that requires only a small change to your Skill code:
In your main index.js entry point, instead of:
exports.handler = function (event, context) {
use something like:
exports.myAppName = function (funcEvent, res) {
Below that, add the following workaround:
var event = funcEvent.body

// since not using Lambda, create dummy context with fail and succeed functions
const context = {
  fail: () => {
    res.sendStatus(500);
  },
  succeed: data => {
    res.send(data);
  }
};
Install and use Google Cloud Functions Local Emulator on your laptop. When you start and deploy your function to the emulator, you will get back a Resource URL something like http://localhost:8010/my-project-id/us-central1/myAppName.
Create a tunnel with ngrok. Then take the ngrok endpoint and put it in place of localhost:8010 in the Resource URL above. Your resulting fulfillment URL will be something like: https://b0xyz04e.ngrok.io/my-project-id/us-central1/myAppName
Use the fulfillment URL (like above) under Configuration in the Alexa dev console, selecting https as the Service Endpoint Type.
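If you would rather skip the emulator, the same dummy-context trick also works behind a plain Express server. A rough sketch (the route path, port, and module name are only placeholders):
const express = require('express');
const bodyParser = require('body-parser');
const skill = require('./index'); // hypothetical module exporting the Lambda-style handler

const app = express();
app.use(bodyParser.json());

app.post('/alexa', (req, res) => {
  // minimal Lambda-like context whose callbacks answer the HTTP request
  const context = {
    succeed: data => res.json(data),
    fail: () => res.sendStatus(500)
  };
  skill.handler(req.body, context);
});

app.listen(3000);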
