Webhook and JavaScript for Dialogflow, where do I find documentation? - dialogflow-es

I'm trying to learn Dialogflow (Google Assistant) and I'm not very experienced in JavaScript. I'm looking at various examples like Build Actions for the Google Assistant Level 2, 3, etc.
But it's as if you're already supposed to know how the JavaScript side of things (the code deployed to Firebase, Node.js) works. I've found the API reference, but it's very extensive. I just want to look things up from the JavaScript code examples – say, a description of the .intent method on the dialogflow object and which other methods there are in this class. But I have no idea where to find things like this. When I search I end up in the wrong places.

Part of the issue may be that you're mixing up two different, but related, technologies. Actions on Google is the set of tools for building for the Google Assistant. It can use Dialogflow as its Natural Language Processing system, and most people do, but it doesn't have to. Similarly, Dialogflow supports the Google Assistant as one of its platforms, but it also supports other platforms.
Adding to this confusion is that each has its own library targeted at its specific needs. The codelabs that you pointed to use the "actions-on-google" library, while the documentation link you pointed to goes to the "dialogflow" library.
Documentation for Conversational Actions can be found at https://developers.google.com/assistant/conversational/overview. Under the reference page there, you'll find a link to the documentation for the library (rather than following that link directly, go through the developers page, since the link points to a specific version of the library). You may find this documentation difficult to read, since it contains classes that aren't relevant to you but are there to abstract away differences between different versions of the Actions on Google and Dialogflow protocols.
It also doesn't make clear, for example, that the typical lines to set up the application
const {dialogflow} = require('actions-on-google');
const app = dialogflow({debug: true});
are creating an instance of a DialogflowApp. Or that the object that is passed to your handler and typically named "conv":
app.intent('Default Welcome Intent', (conv) => {
  // Do things
});
is an instance of DialogflowConversation.
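To see how the pieces fit together, here is a minimal sketch of a complete fulfillment deployed to Cloud Functions for Firebase (the intent name and reply text are just examples, not taken from any specific codelab):

const functions = require('firebase-functions');
const {dialogflow} = require('actions-on-google');

// app is a DialogflowApp instance
const app = dialogflow({debug: true});

// Each handler receives a DialogflowConversation, conventionally named "conv"
app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Welcome! How can I help you?');
});

// Expose the app as the HTTPS function that the Dialogflow fulfillment webhook calls
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);

The methods you'll use most (such as ask() and close()) are documented on those two classes in the actions-on-google reference.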

Related

Need documentation on RouterDialog, Skill State, SkillContext and SemanticAction Class

I am looking for documentation and code examples on how the following classes can be used in code. I am building a bot using the Node.js v4 SDK.
SemanticAction Class
SkillState class
SkillContext
RouterDialog
This is likely going to be closed, but I'll point you in the right direction, to the extent possible.
SemanticAction - No docs are available for this, although this kind of covers it. Here's the reference. And here's how it's used in Virtual Assistant
SkillState - No docs for this. This is just a model and does nothing more than store the state of a skill
SkillContext - No docs for this. This is context to share between Bots and Skills
RouterDialog - No docs for this. It pretty much does what the name says and routes the user through dialog flows
There are little to no docs for these for a couple of reasons:
VA is still in preview and development priority is more or less C# > TypeScript > Docs
The main docs are here, but you're asking for documentation about very specific classes, which is unlikely to exist for nearly any SDK. For the most part, you need to read through the code to see how it works.

How to make a multiple language Dialogflow webhook/fulfillment?

I have built a Dialogflow app in English. I used a Node.js webhook to provide the answers.
Now, I want to add a new language (Spanish). Is there a way to add it without duplicating the webhook?
Is there a prebuilt library, like i18n, where I provide the translations and, depending on the "languageCode": "en", send the answer?
The general solution is to use a localization library of your choice, send it a string identifier, and send back the response that it generates. Google suggests several libraries, but you should be able to use whichever one works best for you. There are some issues using i18n-node with asynchronous code; the problems and solutions are also discussed by the team.
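As a rough sketch of that approach (assuming the i18n npm package, a plain Express webhook, and locale files en.json and es.json that both define a "welcome" key), you can pick the translation based on the languageCode that Dialogflow sends in queryResult:

const express = require('express');
const i18n = require('i18n');

i18n.configure({
  locales: ['en', 'es'],
  directory: __dirname + '/locales',  // locales/en.json and locales/es.json
  defaultLocale: 'en',
});

const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // "en", "es", etc.; fall back to English for anything unexpected
  const lang = (req.body.queryResult.languageCode || 'en').split('-')[0];

  res.json({
    // Passing the locale per call avoids the global-state problems with async code
    fulfillmentText: i18n.__({phrase: 'welcome', locale: lang}),
  });
});

app.listen(process.env.PORT || 8080);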
There is also the multivocal library, which takes a different approach to generating responses, but has localization built-in.

Dialogflow SDK or Dialogflow REST API, which is faster in terms of response time?

I have used Dialogflow for developing the app for Google Assistant. I have created intents and entities in the Dialogflow web GUI and I'm using a webhook response for further conversation.
Now I want to build a chatbot that is part of an existing Android or iOS app and use the code I already wrote for Dialogflow as part of this. What do I need to be aware of when I do so? It looks like I can use the SDK for that platform or make calls to the Dialogflow REST API. Which is faster or are there any tradeoffs? Can I use the Dialogflow NLP without going over the network?
Note: Dialogflow API V1 is deprecated and will be shut down on October 23rd, 2019.
That means that the official JavaScript, native Android, native iOS, and Cordova clients will stop working, since they all use V1. There's no word on if or when these clients will be upgraded to V2.
So the best bet right now is to use the REST APIs.
There are a few things to be aware of when moving from fulfillment that was built for Actions on Google to using it to also provide responses for other platforms. Actions on Google expects the responses to be formatted slightly differently, and if you're using AoG-specific features (such as a SimpleResponse object or a Card object), they might not appear for other Dialogflow integrations. So you'll need to go over your webhook code to make sure what you send back works across platforms. Your logic and the Dialogflow UI builder should pretty much remain the same; it is just your backend that might need some work.
To make the call, as you say, you can either do the REST call yourself or use the SDK built by Dialogflow. While the SDK will be slightly faster, since it uses Protocol Buffers instead of REST, the difference will likely be fairly slight in most cases. If you're planning to stream audio, you will likely need to either use the SDK or your own Protocol Buffers implementation, because REST doesn't handle that as well. If you're just sending text and are more comfortable doing REST calls, then that is a perfectly reasonable approach.
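For reference, a minimal sketch of a text call with the official Node.js V2 client (the dialogflow npm package), assuming a project ID, application default credentials, and a session ID you generate yourself:

const dialogflow = require('dialogflow');

async function detectIntent(projectId, sessionId, text) {
  const sessionClient = new dialogflow.SessionsClient();
  const session = sessionClient.sessionPath(projectId, sessionId);

  const [response] = await sessionClient.detectIntent({
    session,
    queryInput: {text: {text, languageCode: 'en-US'}},
  });

  // The same queryResult your webhook already works with
  return response.queryResult.fulfillmentText;
}

A REST call goes to the same detectIntent endpoint with an OAuth access token in the Authorization header, and the request and response bodies look essentially the same.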
There is no "local Dialogflow" library. All calls have to go over the network. There are other libraries that do Speech-to-Text and NLP locally if that is what you need.

Which is better to use for Google Assistant? Actions SDK or JSON request response

I have built multiple Actions for the Google Assistant using the JSON request and response (V2), but I have heard that the Actions SDK should be preferred for building Actions specifically for the Google Assistant. I am confused about whether to use the Actions SDK or the JSON request/response.
For example, on this link, every code sample has two tabs: Node.js using the Actions SDK, and JSON using the JSON request/response.
Which one should be preferred and in which scenarios?
Thanks!
Let's first look at what those tabs mean, and then discuss what your best approach should be.
node.js vs JSON tabs
The "node.js" tab shows what the code looks like using the actions-on-google library. For the most part, this library uses the same code if you're using either the Action SDK or using Dialogflow to implement. Where there are differences, the documentation does note how you have to handle them - this is particularly true when it comes to how you have to handle responses.
The "JSON" tab shows what the JSON would look like if you are not using the actions-on-google library and need to send JSON yourself. You might do this because you're using a language besides node.js, or you just want to know what the underlying protocol looks like.
The catch is that the JSON illustrated here is what would be used by the Actions on Google JSON protocol. If you're using Dialogflow, this JSON would be wrapped inside the payload.google field, and there are a couple of other documented differences. So when generating JSON for a Dialogflow response, you should use this as a guide, but be aware of how it might differ.
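As a rough sketch of that wrapping (the values are just illustrative), a rich response that sits at the top level of the Actions on Google protocol comes back from a Dialogflow fulfillment like this:

const dialogflowWebhookResponse = {
  fulfillmentText: 'Hi there!',  // fallback text for non-Assistant integrations
  payload: {
    google: {
      expectUserResponse: true,
      richResponse: {
        items: [
          {simpleResponse: {textToSpeech: 'Hi there!'}},
        ],
      },
    },
  },
};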
What should you use?
What you use depends on your needs and your goals for developing.
If you are trying to develop something you intend to release, or might do so someday, and you are new to voice or bot interfaces, then you'll probably want to use Dialogflow - no matter what other choices you may make.
If you have your own Natural Language Processing (NLP) system, then you will want to use the Actions SDK instead of Dialogflow. How you handle it (using the actions-on-google library or using JSON) will depend on how you need to integrate with that NLP system.
If you're familiar with node.js, or want to learn it, then using the actions-on-google library is a good choice and that first tab will help you. (There are other choices. Dialogflow also has a dialogflow-fulfillment library, which is good if you want to be able to support the bot platforms it supports as well. The multivocal library has a more configuration-driven template approach to building a conversational backend that was designed to work with the Assistant and Dialogflow. Both of these illustrate how to do the same things that Google's documentation does.)
If you would rather use another language, you'll need to reference the JSON documentation because there are few complete libraries for these platforms. Your best bet would be to use Dialogflow with JSON. Dialogflow has some example JSON for how to apply the Google documentation you cite to their fulfillment protocol.
Summary
You should probably use Dialogflow.
If you can, use actions-on-google or another library.
If you have a need to use another language or want to use the JSON, be aware of the possible differences.

Any libraries that can make accessing Google APIs as a service account simpler for NodeJS?

Background: My idea is to create a primarily content-heavy website (think news articles or blog posts) written entirely in Node.js. Since creating content on Google Drive (Google Docs in particular) is very simple, what I would like to do is have Node.js retrieve the website's content from Google Docs.
Challenge: As far as I can tell, the correct way to do this according to Google is to create a Service Account so that the application can access the files stored on Google Drive without requiring user intervention in the form of a confirmation. Google provides three libraries (Java, Python, and PHP) for server-to-server requests. Does anyone know of anything similar already written by the Node community? I am aware of node-oauth, but I've searched through its source and haven't found anything referencing private keys, which are required for server-to-server interaction, which I'm taking to mean it's not supported. Writing one is also an option, but I'd like to avoid that if at all possible. Looking at the Google-written Java OAuth2 client library makes it pretty clear that it's not an easy task.
Thanks in advance!
This is one library I've found that looks pretty thorough and complete for creating JSON Web Tokens: JWCrypto
I know this thread is old, but in the event others arrive here looking for an answer:
Google is working on an official module to access all of their APIs. It's alpha, so be careful, but it looks very nice: github repository
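That module has since shipped as the googleapis package on npm. As a rough sketch (the key file name and scope are placeholders), service-account access to Drive with it looks something like this:

const {google} = require('googleapis');

async function listDriveFiles() {
  // JSON key file downloaded for the service account in the Cloud console
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json',
    scopes: ['https://www.googleapis.com/auth/drive.readonly'],
  });
  const authClient = await auth.getClient();

  const drive = google.drive({version: 'v3', auth: authClient});
  const res = await drive.files.list({pageSize: 10, fields: 'files(id, name)'});
  return res.data.files;
}

Remember to share the Drive folder or documents with the service account's email address, or it won't be able to see them.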
