I'm developing an Alexa Skill hosted on an AWS Lambda.
Everything works fine with the skill most of the time, and there are no problems running it on the Alexa Developer Skill simulator page.
However, when running on an Echo Dot device, sometimes while processing a user response, the skill will just die/quit/crash for no reason, and no error is thrown at all.
Sorry I can't be more specific than this, I just wondered whether anyone else had encountered a similar issue, and whether there are any common "gotchas" I should be aware of.
Cheers.
I would check CloudWatch, and maybe increase the Lambda timeout / memory. The other thing to consider is your internet connection, firewall, etc. I experienced something similar when my device was connected behind a specific firewall configuration.
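One pattern that helps when the failure is a silent timeout rather than an error: log how much execution time is left at the start and end of each request, so CloudWatch shows requests that began but never finished. A rough sketch, assuming a Node.js Lambda whose real handler is an async function exported from a module (the ./skill path and skillHandler name are just placeholders for your own code):

```js
// Rough sketch (not the official ASK pattern): wrap your existing handler so every
// request logs when it starts and how much time is left, making silent timeouts
// visible in CloudWatch as requests that start but never log a "finished" line.
// './skill' and `skillHandler` are illustrative names for your own module/handler.
const { handler: skillHandler } = require('./skill');

exports.handler = async (event, context) => {
  const type = event.request && event.request.type;
  console.log('request started:', type, 'remaining ms:', context.getRemainingTimeInMillis());
  try {
    const response = await skillHandler(event, context);
    console.log('request finished:', type, 'remaining ms:', context.getRemainingTimeInMillis());
    return response;
  } catch (err) {
    console.error('handler threw:', err);
    throw err;
  }
};
```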
I'm working on a project in the Google Cloud App Engine (Node.js flexible runtime), and while I've had a pretty good experience with it so far, I've recently run into a problem where the engine will sometimes not respond to a specific Pub/Sub notification. Sometimes the results appear after a few minutes, but often it requires me to redeploy the app and lose any messages I had queued up beforehand. Interestingly enough, when I send a Pub/Sub message through a different subscription, the engine responds to it fine. The backlogged messages are then handled, but sending the problematic message again still does not work.
I'm not really sure how to solve this issue. There is no evidence in the logs that the app ever received the message from Pub/Sub. Additionally, even if the messages do eventually reach their end destination given enough time, waiting that long has a negative impact on the project overall.
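For context, a push subscription simply POSTs the message to the app's HTTP endpoint and a 2xx response acknowledges it, so every delivery that actually reaches the instance should leave a log line at the top of the handler. A simplified Express sketch of that shape (the route name and structure are illustrative, not my exact code):

```js
// Minimal sketch of a Pub/Sub push endpoint with explicit logging. A push delivery
// is a POST with { message: { data, messageId, ... }, subscription }, and returning
// a 2xx acknowledges it; anything else makes Pub/Sub retry.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/pubsub/push', (req, res) => {
  const message = req.body && req.body.message;
  // If a delivery never produces this line, the request never reached the instance.
  console.log('Pub/Sub push received:', message && message.messageId,
              'from', req.body && req.body.subscription);

  const payload = message && message.data
    ? Buffer.from(message.data, 'base64').toString('utf8')
    : '';
  console.log('payload:', payload);

  res.status(204).send(); // ack
});

app.listen(process.env.PORT || 8080);
```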
I'm willing to provide more information to help reproduce the error.
Thanks in advance
Edit: The issue has increased significantly in severity. Please see my updated post for the current extent of the issue.
I have an Alexa skill that has been published and available in the Alexa skill store for some time. Recently, users have lost the ability to enable this skill by voice; I noticed this within the last 7 days.
Now when a user asks 'Alexa, enable shop clerk', they hear the message:
"If you'd like to enable this skill, you can do so by finding it in the Skills section of your Alexa app."
Previously (and for virtually all skills I've tried), the normal response was to install and enable the skill.
Has anyone seen this behavior or have an idea of how I can resolve this issue?
I received the following info from the Amazon Alexa team:
Thanks for your patience. To ensure the high-quality and accuracy of the voice-enablement feature for skills, we are making improvements to the feature. During this time, the feature will not apply to all skills.
Basically, they're saying "yeah, it doesn't work for your skill anymore."
I followed up with them, and they acknowledged that it is a challenge for us, but they could give no information as to why this happened to us, how we can resolve it, or when (or even if) it will be resolved.
Might just be that I'm getting documentation fatigue (going down the rabbit hole that is the AWS docs)... but I'm not finding any turnkey npm packages/examples for authenticating users against AWS Cognito in Node.js/Express. I've seen quite a few examples, but they all either bomb out, are incomplete/vague, require duct tape and bubble gum (polyfills, webpack, switching to React), or suggest handling everything on the client (rather than in the Node backend/routing).
I've also tried using the Cognito application UI for login, but I haven't yet seen how to parse the tokens, etc., in order to validate the session from screen to screen.
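From what I've gathered so far, validating the session server-side comes down to verifying the Cognito JWTs against the user pool's published JWKS. Something like the following Express middleware is roughly what I'm picturing (untested sketch; the jsonwebtoken and jwks-rsa packages are just what examples seem to use, and the region/pool IDs are placeholders):

```js
// Untested sketch of verifying a Cognito-issued JWT in Express middleware.
// Assumes the `jsonwebtoken` and `jwks-rsa` packages; REGION and USER_POOL_ID
// are placeholders for your own setup.
const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');

const REGION = 'us-east-1';            // placeholder
const USER_POOL_ID = 'us-east-1_XXXX'; // placeholder
const ISSUER = `https://cognito-idp.${REGION}.amazonaws.com/${USER_POOL_ID}`;

const client = jwksClient({ jwksUri: `${ISSUER}/.well-known/jwks.json` });

function getKey(header, callback) {
  client.getSigningKey(header.kid, (err, key) => {
    if (err) return callback(err);
    callback(null, key.getPublicKey());
  });
}

// Express middleware: rejects the request unless the bearer token verifies.
function requireCognitoAuth(req, res, next) {
  const auth = req.headers.authorization || '';
  const token = auth.replace(/^Bearer\s+/i, '');
  jwt.verify(token, getKey, { issuer: ISSUER }, (err, decoded) => {
    if (err) return res.status(401).json({ error: 'invalid token' });
    req.user = decoded; // claims: sub, email, token_use, etc.
    next();
  });
}

module.exports = requireCognitoAuth;
```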
NOTE: I'm still early in my dive into Cognito... but I'm wondering if anyone has come across (or built) any npm packages that might make this less of a headache.
thx.
I am working on an audio-related Alexa skill with Node.js, but after making some changes I found it difficult to debug.
I added several console.log statements and also tried adding a debugger statement, but I don't understand where to check that console.log output.
Can anyone tell me an easy way to debug an Alexa skill? (I already checked console.aws.amazon.com, but everything there seems unrelated.)
If you have an Alexa-hosted skill, go to the Code section; the logs link is shown in the lower left.
If you instead hosted your skill on Lambda (which means you have an AWS account), you have the CloudWatch logging that Lambda functions get by default.
An important step in Alexa skill development is to examine the logs by enabling CloudWatch for your skill. It logs the sequence of events as well as any console.log() output you have put in the code.
You have to provide CloudWatch access permission for the role that you supplied when setting up the Lambda Function for your Alexa Skill.
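If no log group ever shows up in CloudWatch, that role is likely missing the basic logging permissions. The standard statement (what the AWSLambdaBasicExecutionRole managed policy grants) looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```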
Further, if you want to debug the execution locally, you can make use of this extremely useful tool: Bespoken.
EDIT: STILL NOT ANSWERED. I appreciate the advice I have received so far, but I still have not found a proper way to test the amount of resources my server is using. I decided to use GCE instead of GAE but I still want to measure the resource usage.
I have searched all over google as well as SA and can't seem to figure this one out.
I would like to deploy my (very small) node.js server to either Google App Engine or Google Compute Engine (not sure which to use yet).
I see that they charge based on how many resources you use, but how can I check this before I make my decision? Basically what I would like to do is find a way to analyse my server and see what CPU/DISK/NETWORK/RAM/Etc it uses, and then possibly make some refinements to my code to get the usage down as low as possible.
I am a hobbyist programmer and this server is just for personal stuff so I don't need anything fancy. I just want to get it hosted on google and not my home server. My real fear is that, since I am not a professional, my code might be doing some crazy background stuff repeatedly that would rack my usage up for nothing.
Quick rundown on what my server does:
Basic Node.js Express template that IntelliJ generated for me, then I added my code to sit and listen to a Firebase database. When Firebase gets a message (once or twice a day maybe, about the size of a text message), the server sends a quick GCM/FCM message to a few devices. Extremely simple server, very little code. Nothing crazy.
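For concreteness, it's basically the standard firebase-admin pattern of listening for child_added and fanning the message out over FCM, something like this (simplified sketch of the pattern, not my real code; the database path, project URL, and device tokens are placeholders):

```js
// Simplified sketch: listen to a Realtime Database path and forward each new
// message to a few devices via FCM. Path, URL, and tokens are placeholders.
const admin = require('firebase-admin');

admin.initializeApp({
  databaseURL: 'https://my-project.firebaseio.com', // placeholder project URL
});

const deviceTokens = ['device-token-1', 'device-token-2']; // placeholder tokens

admin.database().ref('/messages').on('child_added', (snapshot) => {
  const msg = snapshot.val();
  deviceTokens.forEach((token) => {
    admin.messaging().send({
      token,
      notification: { title: 'New message', body: String(msg && msg.text) },
    }).catch((err) => console.error('FCM send failed:', err));
  });
});
```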
As a little bonus for me, if you have a suggestion as to which platform I should use, I am all ears.
If you do not need this server to run 24x7, use App Engine. It stops an instance if it is not being used for 15 minutes. The startup time for new instances depends on your code, but for Node.js instances it should not be long.
Generally speaking it is easier to run an app on App Engine than Compute Engine, but if you use a single instance and don't change code often the difference is negligible.
App Engine has a generous free quota. You may end up paying nothing until the usage gets over a certain threshold.
You can run some diagnostic tools on your existing server, but even then you will get an approximation - a server with a different combination of resources sitting on a different network may use resources differently. You may be able to get a rather accurate estimate of memory usage, though.
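For the memory side specifically, Node can report its own numbers; a tiny sketch that logs them periodically while you exercise the server (the interval is arbitrary):

```js
// Tiny sketch: periodically log the Node process's own memory numbers plus the
// host load average, to get a ballpark for instance sizing.
const os = require('os');

setInterval(() => {
  const mem = process.memoryUsage();
  console.log(
    'rss MB:', (mem.rss / 1024 / 1024).toFixed(1),
    'heapUsed MB:', (mem.heapUsed / 1024 / 1024).toFixed(1),
    'load avg (1m):', os.loadavg()[0]
  );
}, 60 * 1000);
```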
If this is a small app with not too many users, even a small instance should be able to handle it. There is no harm in trying - start with the smallest instance, test, go to the next instance up if tests fail. Your key concern should be to have enough memory to handle a small number of requests.
As for the number of requests your server can handle, you can configure automatic scaling. It is a default option in App Engine and can be enabled for flexible runtime. Then you can have the smallest instance (i.e. your server does not crash due to the lack of memory) running, and another instance will be added if and when that small instance is not enough.
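If you do end up on the flexible runtime, the scaling bounds live in app.yaml; something along these lines keeps the footprint small (values are only an example, check the current docs for the exact fields):

```yaml
# Illustrative app.yaml for the Node.js flexible environment; values are examples only.
runtime: nodejs
env: flex

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
```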
Well, after over a month I figure I might as well answer this myself.
What I ended up doing was creating a basic instance on Compute Engine (the micro, the smallest one available) and letting it just sit there for a few weeks. I looked back at the data to see what some good baselines were and took note.
Then I took my server code and ran it on the instance. I left it there for a few days, changed it, updated it, etc., just trying to simulate the things I would normally be doing. I sent messages from my client app (that's what this server handles, after all is said and done) and let this go on for a few more weeks.
The rest is history. I looked at the baseline then looked at my new memory, CPU, network and disk usage and there we go. Good to go. My free trial still isn't even over so it was a free experiment.
The good news is that my server is more 'lightweight' than I thought.