I have an action which is a simple word game, and upon completing the game it should exit the conversation. I want the action to support Google Assistant on speaker-based devices as well as mobile phones, etc., so I am handling the intent in a general fashion.
const {WebhookClient} = require('dialogflow-fulfillment');
...
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
...
function answerIntent(agent) {
if (gameShouldEnd) {
agent.end("Your score is 3/5. Cheers! GoodBye!");
}
}
...
}
This results in the log error MalformedResponse: 'final_response' must be set.
I tried the conv API too, and that results in the same error.
const {WebhookClient} = require('dialogflow-fulfillment');
...
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
...
function answerIntent(agent) {
if (gameShouldEnd) {
let conv = agent.conv();
conv.tell("Your score is 3/5. Cheers! GoodBye!");
agent.add(conv);
}
}
...
}
Please suggest how to close the mic when the game ends while still sending a response.
There seems to be an issue with version 0.5.0 of the dialogflow-fulfillment package, according to the issue logged at https://github.com/dialogflow/dialogflow-fulfillment-nodejs/issues/149
I tried updating to 0.6.0, which has breaking changes; that solved the problem I posted here but created context-related problems.
Have you tried the close method?
conv.close("Your score is 3/5. Cheers! GoodBye!");
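A minimal sketch of how that might look inside your intent handler, reusing the gameShouldEnd flag from the question:
function answerIntent(agent) {
  if (gameShouldEnd) {
    // Get the Actions on Google conversation object and close it;
    // close() sends the final response and releases the mic.
    const conv = agent.conv();
    conv.close("Your score is 3/5. Cheers! Goodbye!");
    agent.add(conv);
  }
}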
Please check that the lifespan of your intent is 1. After that you can use the command below:
agent.end("bye");
So recently the new version of the Discord bot API for Node came out, along with interactions and so on. They also changed some other things; I don't know why, but they did.
I was trying out the audio-playing code to see how it works and maybe update some of my older bots, when I ran into the issue that it just doesn't work. I've been following the docs at https://discordjs.guide/voice/voice-connections.html#life-cycle and https://discordjs.guide/voice/audio-player.html#life-cycle, but they're really just not working.
Just testing code looks like this:
const Discord = require('discord.js');
const {token} = require("./config.json");
const { join } = require("path");
const { joinVoiceChannel, createAudioPlayer, createAudioResource, AudioPlayerStatus, VoiceConnectionStatus, StreamType } = require("@discordjs/voice");
const { createReadStream } = require("fs");
client.on("ready", async () => {
const connection = joinVoiceChannel({
channelId: channel.id,
guildId: channel.guild.id,
adapterCreator: channel.guild.voiceAdapterCreator,
});
const audioPlayer = createAudioPlayer();
const resource = createAudioResource(createReadStream(join(__dirname, "plswork.mp3")));
const subscription = connection.subscribe(audioPlayer);
audioPlayer.play(resource);
audioPlayer.on(AudioPlayerStatus.Playing, () => {
console.log("currently playing");
console.log("resource started:", resource.started);
});
audioPlayer.on('error', error => {
console.error(`Error: ${error.message} with resource ${error.resource.metadata.title}`);
});
audioPlayer.on(AudioPlayerStatus.AutoPaused, () => {
console.log("done?");
});
}); // end of the "ready" handler
I create a connection, audioPlayer, and resource, but after subscribing the connection to the audioPlayer and playing the resource, no audio is played, no error is raised (in audioPlayer.on("error", ...)), and the AutoPaused status is triggered immediately.
By logging the resource I see that resource.playbackDuration is 0, but I don't know how to fix this as I can't find much on the internet about this topic.
From what I can tell by looking at your code, you are not requiring the right things from @discordjs/voice; you need to require at least AudioPlayerStatus and createAudioPlayer. Also, judging by your resource, I think you're trying to create an AudioResource, so you'll need that as well. You'll need something like:
const { AudioPlayerStatus, createAudioPlayer, AudioResource, StreamType } = require('@discordjs/voice');
/* */
const audioPlayer = createAudioPlayer();
/* */
audioPlayer.play(resource);
audioPlayer.on(AudioPlayerStatus.Playing, () => {
// Do whatever you need to do when playing
});
/* */
In conclusion, I suggest you look up the life cycle of an AudioPlayer and how an AudioPlayer is created. Also, be sure to create your resource correctly; the AudioResource documentation is there if you ever need it.
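For what it's worth, here is a minimal sketch (not from the answer above) of a pattern that often fixes the immediate AutoPaused state: wait for the voice connection to reach Ready before playing, since the player auto-pauses when none of its subscribed connections are ready. The channel object and file name are assumptions carried over from the question.
const { joinVoiceChannel, createAudioPlayer, createAudioResource, AudioPlayerStatus, VoiceConnectionStatus, entersState } = require('@discordjs/voice');
const { createReadStream } = require('fs');
const { join } = require('path');

async function playFile(channel) {
  const connection = joinVoiceChannel({
    channelId: channel.id,
    guildId: channel.guild.id,
    adapterCreator: channel.guild.voiceAdapterCreator,
  });
  // Wait (up to 30 seconds) for the connection to actually become Ready
  await entersState(connection, VoiceConnectionStatus.Ready, 30_000);

  const audioPlayer = createAudioPlayer();
  connection.subscribe(audioPlayer);

  // plswork.mp3 is the file from the question; any readable stream works here
  const resource = createAudioResource(createReadStream(join(__dirname, 'plswork.mp3')));
  audioPlayer.play(resource);

  audioPlayer.on(AudioPlayerStatus.Playing, () => console.log('currently playing'));
  audioPlayer.on('error', (error) => console.error('Audio player error:', error.message));
}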
I want to create a Dialogflow webhook that responds to the user slowly, so it feels more like someone is on the other end and takes a few seconds to reply.
I'm using the built-in code editor, and can create an Intent handler (see code), but I just don't know how to get it to reply slower.
const functions = require('firebase-functions');
const {WebhookClient} = require('dialogflow-fulfillment');
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
const agent = new WebhookClient({ request, response });
function welcome (agent) {
agent.add(`I'm replying too quickly!`);
}
function fallback (agent) {
agent.add(`I didn't understand`);
agent.add(`I'm sorry, can you try again?`);
}
// Run the proper function handler based on the matched Dialogflow intent name
let intentMap = new Map();
intentMap.set('Default Welcome Intent', welcome);
intentMap.set('Default Fallback Intent', fallback);
agent.handleRequest(intentMap);
});
The best way to handle this is to add the delay in the UI code.
Keep the Dialogflow Intent as it is, and once the bot response is received on the frontend, show it with a delay.
Here is how we handle it at Kommunicate: the Dialogflow response comes back without any delay, then in the JavaScript code we show a typing-indicator animation and add some delay before displaying the message.
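As an illustration only, a sketch of that frontend delay; showTypingIndicator, hideTypingIndicator, and renderMessage are hypothetical placeholders for whatever your chat UI exposes, not Kommunicate APIs.
// Sketch: delay the display of an already-received Dialogflow reply on the client
function displayBotReplyWithDelay(replyText, delayMs) {
  showTypingIndicator();        // hypothetical UI helper
  setTimeout(() => {
    hideTypingIndicator();      // hypothetical UI helper
    renderMessage(replyText);   // hypothetical UI helper
  }, delayMs);
}
// e.g. displayBotReplyWithDelay(response.fulfillmentText, 2000);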
Needing to reply slower is rarely a desired thing, to be honest. But the easiest way to do so is to use setTimeout() and delay for a little bit. (Don't delay too long - more than 5 or 10 seconds and Dialogflow will time out.)
The catch with using setTimeout(), however, is that the handler will need to return a Promise. So you'll need to wrap the call to setTimeout() and the agent.add() in a Promise handler. A function that does this might look something like:
function respondSlowly( agent, msg, ms ){
return new Promise( resolve => {
setTimeout( () => {
agent.add( msg );
resolve();
}, ms );
});
}
You would then call this from your handler, providing the agent, the message, and how many milliseconds to wait to reply:
function welcome( agent ){
return respondSlowly( agent, `Hi there, slowly`, 2000 ); // Wait 2 seconds to reply
}
I am currently evaluating WebViewer version 5.2.8.
I need to set some JavaScript function/code as an action for triggers like the calculate trigger, format trigger, and keystroke trigger through the WebViewer UI.
Please help me on how to configure javascript code for a form field trigger in WebViewer UI.
Thanks in advance,
Syed
Sorry for the late response!
You will have to create the UI components yourself that will take in the JavaScript code. You can do something similar to what the FormBuilder demo does with just HTML and JavaScript. However, it may be better to clone the open source UI and add your own components.
As for setting the action, I would recommend trying out version 6.0 instead as there is better support for widgets and form fields in that version. However, we are investigating a bug with the field actions that will throw an error on downloading the document. You should be able to use this code to get it working first:
docViewer.on('annotationsLoaded', () => {
const annotations = annotManager.getAnnotationsList();
annotations.forEach(annot => {
const action = new instance.Actions.JavaScript({ javascript: 'alert("Hello World!")' });
// C for Calculate, and F for Format
annot.addAction('K', action);
});
});
Once the bug has been dealt with, you should be able to download the document properly.
Otherwise, you will have to use the full API and that may be less than ideal. It would be a bit more complicated with the full API and I would not recommend it if the above feature will be fixed soon.
Let me know if this helps or if you need more information about using the full API to accomplish this!
EDIT
Here is the code to do it with the full API! Since the full API works at a low level, very close to the PDF specification, it takes quite a bit more to make it work. You do still have to update the annotations with the code I provided before, which I will include again.
docViewer.on('documentLoaded', async () => {
// This part requires the full API: https://www.pdftron.com/documentation/web/guides/full-api/setup/
const doc = docViewer.getDocument();
// Get document from worker
const pdfDoc = await doc.getPDFDoc();
const pageItr = await pdfDoc.getPageIterator();
while (await pageItr.hasNext()) {
const page = await pageItr.current();
// Note: this is a PDF array, not a JS array
const annots = await page.getAnnots();
const numAnnots = await page.getNumAnnots();
for (let i = 0; i < numAnnots; i++) {
const annot = await annots.getAt(i);
const subtypeDict = await annot.findObj('Subtype');
const subtype = await subtypeDict.getName();
let actions = await annot.findObj('AA');
// Check to make sure the annot is of type Widget
if (subtype === 'Widget') {
// Create the additional actions dictionary if it does not exist
if (!actions) {
actions = await annot.putDict('AA');
}
let calculate = await actions.findObj('C');
// Create the calculate action (C) if it does not exist
if (!calculate) {
calculate = await actions.putDict('C');
await Promise.all([calculate.putName('S', 'JavaScript'), calculate.putString('JS', 'app.alert("Hello World!")')]);
}
// Repeat for keystroke (K) and format (F)
}
}
pageItr.next();
}
});
docViewer.on('annotationsLoaded', () => {
const annotations = annotManager.getAnnotationsList();
annotations.forEach(annot => {
const action = new instance.Actions.JavaScript({ javascript: 'app.alert("Hello World!")' });
// K for Keystroke, and F for Format
annot.addAction('C', action);
});
});
You can probably put them together under the documentLoaded event but once the fix is ready, you can delete the part using the full API.
I want to get the name of the current intent in the fulfillment, so I can provide a different response depending on which intent I'm in. But I cannot find a function for it.
function getDateAndTime(agent) {
date = agent.parameters.date;
time = agent.parameters.time;
// Is there any function like this to help me get current intent's name?
const intent = agent.getIntent();
}
// I have two intents are calling the same function getDateAndTime()
intentMap.set('Start Booking - get date and time', getDateAndTime);
intentMap.set('Start Cancelling - get date and time', getDateAndTime);
There is nothing magical or special about using the intentMap or creating a single Intent Handler per intent. All the handleRequest() function does is look at the matched Intent's name, get the handler registered under that name in the map, call it, and possibly deal with the Promise that it returns.
But if you're going to violate the convention, you should have a very good reason for doing so. Having a single Intent Handler per Intent makes it very clear what code is being executed for each matched Intent, and that makes your code easier to maintain.
It looks like your reason for wanting to do this is because there is significant duplicate code between the two handlers. In your example, this is getting the date and time parameters, but it could be many more things as well.
If this is true, do what programmers have been doing for decades: push these tasks to a function that can be called from each handler. So your examples might look something like this:
function getParameters( agent ){
return {
date: agent.parameters.date,
time: agent.parameters.time
}
}
function bookingHandler( agent ){
const {date, time} = getParameters( agent );
// Then do the stuff that uses the date and time to book the appointment
// and send an appropriate reply
}
function cancelHandler( agent ){
const {date, time} = getParameters( agent );
// Similarly, cancel things and reply as appropriate
}
intentMap.set( 'Start Booking', bookingHandler );
intentMap.set( 'Cancel Booking', cancelHandler );
request.body.queryResult.intent.displayName will give the intent name.
'use strict';
const functions = require('firebase-functions');
const {WebhookClient} = require('dialogflow-fulfillment');
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
const agent = new WebhookClient({ request, response });
function getDateAndTime(agent) {
// here you will get intent name
const intent = request.body.queryResult.intent.displayName;
if (intent == 'Start Booking - get date and time') {
agent.add('booking intent');
} else if (intent == 'Start Cancelling - get date and time'){
agent.add('cancelling intent');
}
}
let intentMap = new Map();
intentMap.set('Start Booking - get date and time', getDateAndTime);
intentMap.set('Start Cancelling - get date and time', getDateAndTime);
agent.handleRequest(intentMap);
});
But it would make more sense to use two different functions in intentMap.set.
You can try using agent.intent, but it doesn't make sense to use the same function for two different intents.
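For illustration, a minimal sketch of the same handler using agent.intent, which on the dialogflow-fulfillment WebhookClient holds the display name of the matched intent:
function getDateAndTime(agent) {
  // agent.intent is the display name of the matched intent
  if (agent.intent === 'Start Booking - get date and time') {
    agent.add('booking intent');
  } else if (agent.intent === 'Start Cancelling - get date and time') {
    agent.add('cancelling intent');
  }
}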
I'm building an AWS Lex chat bot right now and have run into some issues with the Lambda function settings. According to the sample code, this Lambda function is used at the end of the conversation, which is why the code looks like: function close(...)
'use strict';
// Close dialog with the customer, reporting fulfillmentState of Failed or Fulfilled ("Thanks, your pizza will arrive in 20 minutes")
function close(sessionAttributes, fulfillmentState, message) {
return {
sessionAttributes,
dialogAction: {
type: 'Close',
fulfillmentState,
message,
},
};
}
However, what I would like to do is use the DialogCodeHook instead of this FulfillmentCodeHook.
The simplest logic inside Lex is: ask question 1 --> get answer 1 --> ask question 2 --> get answer 2 --> ask question 3 --> get answer 3.
What I want to do is:
Ask Question 1- Response Value allowed are 1.1, 1.2
If Response Value= Value 1.1
Ask Question 2
If Response Value= Value 1.2
Ask Question 3
Ask Question 4- Value 4.1, Value 4.2
... and so on
On the AWS discussion forum, an answer says:
Yes, you can use Lambda to implement the decision tree. Lambda allows you to set a specific message and elicit a slot using 'dialogAction'.
For this specific conversation flow
if (response_value == 1.1) {
// set dialogAction.message = "Question 2"
...
// set type = ElicitSlot
...
// slotToElicit = "answer2"
}
Similarly you would define conditions to ask Question 3, 4 etc.
But I am not sure where I should put this if statement, or how to use the ElicitSlot dialog action.
Full version of the sample code for the close function is:
'use strict';
// Close dialog with the customer, reporting fulfillmentState of Failed or Fulfilled ("Thanks, your pizza will arrive in 20 minutes")
function close(sessionAttributes, fulfillmentState, message) {
return {
sessionAttributes,
dialogAction: {
type: 'Close',
fulfillmentState,
message,
},
};
}
// --------------- Events -----------------------
function dispatch(intentRequest, callback) {
console.log(`request received for userId=${intentRequest.userId}, intentName=${intentRequest.currentIntent.name}`);
const sessionAttributes = intentRequest.sessionAttributes;
const slots = intentRequest.currentIntent.slots;
const crust = slots.crust;
const size = slots.size;
const pizzaKind = slots.pizzaKind;
callback(close(sessionAttributes, 'Fulfilled',
{'contentType': 'PlainText', 'content': `Okay, I have ordered your ${size} ${pizzaKind} pizza on ${crust} crust`}));
}
// --------------- Main handler -----------------------
// Route the incoming request based on intent.
// The JSON body of the request is provided in the event slot.
exports.handler = (event, context, callback) => {
try {
dispatch(event,
(response) => {
callback(null, response);
});
} catch (err) {
callback(err);
}
};
Hope someone can help! Thank you so much!
Please check this code: https://github.com/nicholasjackson/slack-bot-lex-lambda/blob/master/src/dispatcher.js
It contains functions for all the common scenarios, including the close(...) you already have, as well as the ElicitSlot(...) you're after.
Please note that there is also an ElicitIntent dialog action type, which is not used in that code but could be useful in some scenarios.
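For illustration, here is a sketch (not code from the linked file) of an ElicitSlot-style response and where the branching from the question might live when the intent is configured with a DialogCodeHook; the slot names answer1/answer2 and the question text are placeholders.
'use strict';

// Build an ElicitSlot response (Lex V1 format): Lex will say `message`
// and then wait for the user to fill `slotToElicit`.
function elicitSlot(sessionAttributes, intentName, slots, slotToElicit, message) {
    return {
        sessionAttributes,
        dialogAction: {
            type: 'ElicitSlot',
            intentName,
            slots,
            slotToElicit,
            message,
        },
    };
}

function dispatch(intentRequest, callback) {
    const sessionAttributes = intentRequest.sessionAttributes;
    const slots = intentRequest.currentIntent.slots;

    // This branch runs on each turn while the dialog is in progress
    // (invocationSource === 'DialogCodeHook').
    if (slots.answer1 === '1.1') {
        // Ask Question 2 by eliciting the slot that will hold its answer
        callback(elicitSlot(sessionAttributes, intentRequest.currentIntent.name, slots, 'answer2',
            { contentType: 'PlainText', content: 'Question 2?' }));
        return;
    }
    // ... otherwise branch to Question 3, 4, etc., and call close(...) when done
}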
Hope it helps.
Tibor