I am developing an Alexa skill and testing it on the Alexa simulator provided by Amazon. The skill fetches news based on a keyword the user provides upon request.
I set a keyword, say 'bbc news', as the invocation.
Everything was working fine for the last 2 days; today the Alexa simulator suddenly started sending TWO requests by itself.
Upon calling the invocation 'bbc news', the simulator sends a LaunchRequest to my server and then, within 1 second, automatically sends a 'SessionEndedRequest'.
I don't know what's happening. I debugged my PHP code, but it's working fine.
I just had a similar issue, and after a few hours of changing everything it still didn't work. On AWS Lambda all tests were running successfully. It may sound stupid, but on the testing page (Alexa simulator), where it says at the top 'Skill testing is enabled in:', you have two options: Development / Off.
Try toggling to Off and then selecting Development again. I'm 100% sure it somehow resets the Alexa simulator's current state; it made everything work correctly again for me.
I was facing the same issue. The root cause is that if your previous session was not ended correctly, perhaps because you quit the skill during the conversation or your final response ends with .reprompt(), the skill is still waiting for a reply.
So when you invoke the skill again, it first terminates the previous session by sending a 'SessionEndedRequest', and then sends the 'LaunchRequest' to invoke the skill.
To fix this, check whether the final response of your conversation flow includes a .reprompt(), and remove it. That fixed it for me.
const FnHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'FnIntent';
    },
    handle(handlerInput) {
        // const { attributesManager } = handlerInput;
        // const sessionAttributes = attributesManager.getSessionAttributes();
        // sessionAttributes.state = 'ENDED';
        return handlerInput.responseBuilder
            .speak(`Welcome, start by saying the name!`)
            .reprompt(`Please say a name to start!`) // Remove this
            .getResponse();
    },
};
I have a bot built with MS BotFramework, hosted on Azure. The bot is designed to start the conversation with a welcome message. When I test the bot through the Emulator, or in the Azure test Web Chat, it initiates the conversation as expected with the welcome message.
However, in my chat client using BotFramework-DirectLineJS, it isn't until I send a message that the bot responds with the welcome message (along with a response to the message the user just sent).
My expectation is that when I create a new instance of DirectLine and subscribe to its activities, this welcome message would come through. However, that doesn't seem to be happening.
Am I missing something to get this functionality working?
Given this is working for you in 'Test in Web Chat', I'm assuming your if condition isn't the issue, but check whether it is if (member.id === context.activity.recipient.id) { (instead of !==). The default in the template is !==, but that doesn't work for me outside the Emulator. With === it works both in the Emulator and in other deployed channels.
However, depending on your use case you may want a completely different welcome message for Directline sessions. This is what I do. In my onMembersAdded handler I get the channelId from the activity via const { channelId, membersAdded } = context.activity;, then check that channelId !== 'directline' before proceeding, as in the sketch below.
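To make that concrete, here is a minimal sketch of such a handler, assuming botbuilder's ActivityHandler (it would go in your bot's constructor; the welcome text is illustrative):
this.onMembersAdded(async (context, next) => {
    const { channelId, membersAdded } = context.activity;
    for (const member of membersAdded) {
        // Per the notes above: use === so this fires when the bot itself is
        // added, and skip Directline so it can be welcomed through the
        // 'webchat/join' event described below instead.
        if (member.id === context.activity.recipient.id && channelId !== 'directline') {
            await context.sendActivity('Welcome to the bot!');
        }
    }
    await next();
});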
Instead, I use the onEvent handler to look for and respond to the 'webchat/join' event from Directline. That leaves no ambiguity in the welcome response. A very simple example would look something like this:
this.onEvent(async (context, next) => {
    if (context.activity.name && context.activity.name === 'webchat/join') {
        await context.sendActivity('Welcome to the Directline channel bot!');
    }
    await this.userState.saveChanges(context);
    await this.conversationState.saveChanges(context);
    await next(); // allow any subsequent handlers to run
});
You'll still want to have something in your onMembersAdded for non-directline channel welcome messages if you use this approach.
I'm using @google-cloud/logging to log some stuff out of my Express app over on Cloud Run.
Something like this:
routeHandler.ts
import { Logging } from "@google-cloud/logging";

const logging = new Logging({ projectId: process.env.PROJECT_ID });
const logName = LOG_NAME;
const log = logging.log(logName);

const resource = {
    type: "cloud_run_revision",
    labels: { ... }
};
export const routeHandler: RequestHandler = async (req, res, next) => {
    try {
        // EXAMPLE: LOG A WARNING
        const metadata = { resource, severity: "WARNING" };
        const entry = log.entry(metadata, "SOME WARNING MSG");
        await log.write(entry);
        return res.sendStatus(200);
    }
    catch (err) {
        // EXAMPLE: LOG AN ERROR
        const metadata = { resource, severity: "ERROR" };
        const entry = log.entry(metadata, "SOME ERROR MSG");
        await log.write(entry);
        return res.sendStatus(500);
    }
};
You can see that log.write(entry) is asynchronous, so in theory it would be recommended to await it. But here is what the documentation for @google-cloud/logging says:
Doc link
And I have no problem with that. In my real case, even if log.write() fails, it is inside a try-catch and any errors will be handled just fine.
My problem is that it kind of conflicts with the Cloud Run documentation:
Doc link
Note: if I don't wait for the log.write() call, I'll end the request cycle by responding to the request.
And Cloud Run does behave like that. A couple of weeks back, I tried to respond immediately to the request and fire off a long background job. The process kind of halted for a while, and I think it only resumed once it got another request. Completely unpredictable. When I ran the test I'm mentioning here, I even had MIN_INSTANCE=1 set on my Cloud Run service container, and even that didn't allow my background job to run smoothly. Therefore, I don't think it's fine to leave the process doing background work after I've finished handling a request (the "fire and forget" approach).
So, what should I do here?
Posting this answer as a Community Wiki based on @Karl-JorhanSjögren's correct assumption in the comments.
For log calls in apps running on Cloud Run, you are indeed encouraged to take a fire-and-forget approach, since you don't really need to force synchronicity there.
As mentioned in the comments replying to your concern about the CPU being disabled after the request is fulfilled: the CPU is throttled first, so that the instance can be brought back up quickly, and is only disabled completely after a longer period of inactivity. So firing off small logging calls that in most cases finish within milliseconds shouldn't be a problem.
What is mentioned in the documentation is targeted at processes that run for longer periods of time.
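For concreteness, a minimal sketch of the fire-and-forget version of the handler from the question (type annotations dropped; the catch is just defensive, so a failed write can't surface as an unhandled promise rejection):
export const routeHandler = (req, res, next) => {
    // EXAMPLE: LOG A WARNING, fire-and-forget style
    const metadata = { resource, severity: "WARNING" };
    const entry = log.entry(metadata, "SOME WARNING MSG");
    // No await: respond immediately and let the small write finish on its own.
    log.write(entry).catch((err) => console.error("log.write failed:", err));
    return res.sendStatus(200);
};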
Using the dialogflow-fulfillment-nodejs library to connect Dialogflow to the Zendesk web widget, I experience a very long delay (about 30 seconds), but only with the initial request, e.g. just entering "hello" to trigger the Default Welcome intent. All subsequent requests are fulfilled immediately.
This occurs with a custom Node.js script deployed on Firebase; it does not happen when testing under localhost using the Firebase emulator.
I can't seem to obtain any useful logging output from Firebase to understand where the delay is coming from. The delay happens after calling sessionClient.detectIntent(request):
console.timeLog("process", "Dialogflow Request");
const responses = await sessionClient.detectIntent(request);
console.timeLog("process", "Dialogflow Response");//continues 30 seconds later at very first request
The only observation I have is that the request arrives this late in the onRequest function:
exports.fulfillment = functions.https.onRequest((request, response) => {
    console.log("onRequest Function triggered"); // happens after 30 secs
    //...
});
There is no error shown in the Firebase logs, and the onRequest fulfillment function finishes successfully after a few ms.
I would be thankful for any hints on how to troubleshoot this issue, since I am currently out of ideas.
I have this code:
bot.on('conversationUpdate', (message) => {
    if (message.membersAdded) {
        message.membersAdded.forEach((identity) => {
            if (identity.id === message.address.bot.id) {
                bot.beginDialog(message.address, 'start');
            }
        });
    }
});

bot.dialog('start', [
    (session) => {
        var msg = new builder.Message(session);
        msg.attachments([
            new builder.HeroCard(session)
                .title('test')
                .buttons([{ title: 'testButton', type: 'imBack', value: 'testButton' }])
        ]);
        builder.Prompts.choice(session, msg, ['testButton']);
    },
    (session, results) => {
        session.send('Reached 2nd function!');
        console.dir(results);
        var message = results.response.entity;
        session.beginDialog('anotherDialog', message);
    }
]);
It works fine when using the Bot Framework Emulator.
Bot Framework Emulator Result
However, it doesn't reach the 2nd function in the waterfall steps when using Web Chat (Azure console).
Test in Web Chat Result
What is the difference in behavior between the Bot Framework Emulator and Web Chat?
And what should I modify in the code?
Do you have any idea?
Node.js version: 8.10.0
Bot Framework Emulator version: 4.0.15-alpha
I understand that what you want to do is have the bot start the conversation instead of waiting for the user to say something, which is a very common objective. Unfortunately this is not exactly an easy task with built-in functionality, but fortunately there is a blog post explaining how to do it. The blog post is taken from a workaround posted in a GitHub issue that's linked to in the one Fei Han linked.
The gist is that conversationUpdate events don't contain enough information to support bot state, so dialogs and prompts shouldn't be spawned from that event handler. You can get around this by generating your own event in your client-side code, as in the sketch below. Of course, that probably wouldn't help you when testing in the Azure portal.
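A minimal sketch of that client-side event, assuming BotFramework-DirectLineJS (the event name 'webchat/join' is just a convention that your bot would listen for instead of conversationUpdate):
import { DirectLine } from 'botframework-directlinejs';

const directLine = new DirectLine({ secret: 'YOUR_DIRECTLINE_SECRET' });

// Post a custom event as soon as the connection is created, so the bot
// receives a full-fledged activity it can reply to with its welcome.
directLine.postActivity({
    from: { id: 'user1', name: 'User' },
    type: 'event',
    name: 'webchat/join'
}).subscribe(
    id => console.log('join event sent, activity id:', id),
    err => console.error('failed to send join event:', err)
);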
In general you should expect there to be many differences between the different channels, especially when it comes to the nature of the events produced by the channels. conversationUpdate is a particularly contentious event, and it's known to behave differently in Bot Emulator from the other channels. From the blog post (emphasis mine):
If you’re using WebChat or directline, the bot’s ConversationUpdate is sent when the conversation is created and the user side’s ConversationUpdate is sent when they first send a message. When ConversationUpdate is initially sent, there isn’t enough information in the message to construct the dialog stack. The reason that this appears to work in the emulator is that the emulator simulates a sort of pseudo DirectLine, but both conversationUpdates are resolved at the same time in the emulator, and this is not the case for how the actual service performs.
If you want to avoid writing client code and you're sure your bot will only be used in channels that support the conversationUpdate event, I may have another workaround for you. Even though the blog post is clear that you shouldn't be using conversationUpdate, it may still be acceptable in cases where you just need to send a single message. You can simulate a prompt by sending a single message in your event handler and then behaving as though you're following up on that message in your root dialog. Here's a proof of concept:
bot.on('conversationUpdate', (message) => {
    if (message.membersAdded) {
        message.membersAdded.forEach((identity) => {
            if (identity.id === message.address.bot.id) {
                var msg = new builder.Message()
                    .address(message.address)
                    .attachments([
                        new builder.HeroCard()
                            .title('test')
                            .buttons([{ title: 'testButton', type: 'imBack', value: 'testButton' }])
                    ]);
                bot.send(msg);
            }
        });
    }
});

bot.dialog('/', function (session) {
    if (session.message.text == "testButton") {
        session.send('Reached 2nd function!');
        session.beginDialog('/getStarted');
    } else {
        builder.Prompts.choice(session, "I didn’t understand. Please choose an option from the list.", ['testButton']);
    }
});
Note that this proof of concept is far from robust. Since the root dialog is likely to be accessed from many different places in a real bot, you'll probably want to add a condition to make sure it only responds to the intro prompt one time (see the sketch below), and you'll also probably want to spawn other dialogs.
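For example, one way to add that guard, assuming conversation state storage is configured (the introDone flag name is arbitrary):
bot.dialog('/', function (session) {
    // Only treat 'testButton' as the intro answer the first time through.
    if (!session.conversationData.introDone && session.message.text == "testButton") {
        session.conversationData.introDone = true;
        session.send('Reached 2nd function!');
        session.beginDialog('/getStarted');
    } else {
        // Route everything else to other dialogs or a re-prompt here.
        builder.Prompts.choice(session, "I didn’t understand. Please choose an option from the list.", ['testButton']);
    }
});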
I want to create a Google Sheets spreadsheet from within my Alexa skill, which is written in Node.js. I have enabled the Google API, I set the required scope in the Amazon dev portal, I can actually log into the Google account (so the first few lines of the posted code seem to work), and I do not get any error messages. But the sheet is never created.
Now the main question is whether anyone can see the problem in my code.
But I also have an additional question I would be very interested in: since I use account linking, I cannot try this code in the Alexa test simulator, but have to upload it to Alexa before running it, where I cannot get any debug messages. How does one best debug in this situation?
if (this.event !== undefined) {
    if (this.event.session.user.accessToken === undefined) {
        this.emit(':tellWithLinkAccountCard', 'to start using this skill, please use the companion app to authenticate on Google');
        return;
    }
} else {
    this.emit(':tellWithLinkAccountCard', 'to start using this skill, please use the companion app to authenticate on Google');
    return;
}

var oauth2Client = new google.auth.OAuth2('***.apps.googleusercontent.com', '***', '***');
oauth2Client.setCredentials({
    access_token: this.event.session.user.accessToken,
    refresh_token: this.event.session.user.refreshToken
});

var services = google.sheets('v4');
services.spreadsheets.create({
    resource: { properties: { title: "MySheet" } },
    auth: oauth2Client
}, function (err, response) {
    if (err) {
        console.log('Error : unable to create file, ' + err);
        return;
    } else {
        console.dir(response);
    }
});
Edit: I tried just the lower part manually and could create a spreadsheet. So the problem does indeed seem to be retrieving the access token via this.event.session.user.accessToken.
I find it is much easier to debug issues like this using unit tests, which allow rerunning the code locally. I use npm and Mocha, and this makes it easier to debug both custom and smart home skills. There is quite a bit of information available online about how to use npm and Mocha to test Node.js code, so I won't repeat that here; for example, refer to the Big Nerd Ranch article. It makes your project a bit more complex to set up initially, but you'll be glad you did every time you hit a bug.
In this example, I would divide the code in half:
The first half would handle the request coming from Alexa and extract the token.
The second half would use the token to create the Google doc. I would also pass in the name of the doc to create.
I would test the 2nd part first (see the sketch below), passing in a valid token (for testing only) and a test doc name. When that is working, you'd at least know that the doc-creation code works, and any remaining issues would have to be with the token or how you're getting it.
Once that was working, I would then create a test for the first part.
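A rough sketch of what that extracted second half might look like, mirroring the googleapis calls from the question (the createSheet name and parameters are illustrative, not from the original code):
var google = require('googleapis');

// Hypothetical helper: the doc-creation half, taking the token and title
// as parameters so a unit test can supply known-good values directly.
function createSheet(accessToken, title, callback) {
    var oauth2Client = new google.auth.OAuth2();
    oauth2Client.setCredentials({ access_token: accessToken });
    google.sheets('v4').spreadsheets.create({
        resource: { properties: { title: title } },
        auth: oauth2Client
    }, callback);
}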
I would use a hardcoded JSON object to pass in as the 'event', with event.session.user.accessToken set to the working test token used in the first test:
'use strict';

var token = '<valid token obtained from google account>';

let testEvent = {
    'session': {
        'user': {
            'accessToken': token
        }
    }
};
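Putting it together, the Mocha test for the first part might then look something like this (a sketch only: getTokenFromEvent is a hypothetical helper standing in for your skill's token-extraction logic, and token and testEvent come from the snippet above):
var assert = require('assert');

// Hypothetical "first half" helper: extracts the token from the event.
function getTokenFromEvent(event) {
    return event && event.session && event.session.user
        ? event.session.user.accessToken
        : undefined;
}

describe('token extraction', function () {
    it('returns the access token from the hardcoded event', function () {
        assert.strictEqual(getTokenFromEvent(testEvent), token);
    });
});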