After a Node.js script has an error, it will not start again. AWS Lambda - node.js

I have a lambda function on the nodejs4.x runtime. If my script stops execution due to an error (let's say I try to get .length of an undefined object), then I can't start the script again. It's not as if the script runs and hits the same error; the script doesn't run at all. The lambda handler function is never called the second time.
This lambda function is the endpoint for Amazon Alexa. When I reupload the code (a zip file) then the system works again.
Is this some behavior of Node.js? Is the script ending prematurely, corrupting the files so it cannot start again?
When the server hits an error I get this message: "Process exited before completing request"
And then subsequent requests hit the timeout limit.
Important Edit
I have pinpointed the issue to the npm request module. The module doesn't finish loading, i.e.:
console.log('i see this message');
var request = require('request');
console.log('this never happens');
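When a require call kills the process like this, wrapping it in try/catch can surface the underlying load error in the logs instead of exiting silently. A debugging sketch (the log wording is mine):

```javascript
// Debugging sketch: wrap the suspect require in try/catch so a module-load
// failure is logged instead of terminating the process with no trace.
console.log('i see this message');

var request;
try {
  request = require('request');
  console.log('request loaded successfully');
} catch (err) {
  // The real cause (missing files, a syntax error in the module, etc.)
  // shows up here instead of "Process exited before completing request".
  console.log('failed to load request:', err.message);
}
```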

Couple of things that I know:
If a lambda invocation fails, for any reason, it will be invoked again (actually it will be retried at most 3 times).
However, this is only true for asynchronous invocations; there are two types of invocations.
Any external module that your lambda's code requires must be included in the package that you deploy to the lambda; I have explained this simply in here.
You can write code that accesses a property of an undefined variable; yes, it will throw an exception, and if the invocation is asynchronous it will be retried 2 more times, which will of course fail too.

Since the Lambda function fails when calling require('request') I believe that the project has not been deployed correctly. request must be deployed with the Lambda function because it is not part of Node.js 4.3.2 (the current Lambda JavaScript runtime).
Make sure that:
request is added to your package.json file (e.g. by calling $ npm install request --save, see npm install for details).
You create a deployment package by zipping your project folder (including the node_modules folder).
That the deployment .zip is uploaded to your Lambda function.

So after contacting AWS through their forums, this turns out to be a bug. The container is not cleared upon an error, so the code has to be re-uploaded.
A solution is to make a CloudWatch alarm that fires another lambda function that re-uploads the code automatically.
They are working on a fix.
Forum post: https://forums.aws.amazon.com/thread.jspa?threadID=238434&tstart=0

In fact there are many cases when Lambda becomes unresponsive, e.g.:
Parsing invalid JSON:
exports.handler = function (event, context, callback) {
    var nonValidJson = "Not even Json";
    var jsonParse = JSON.parse(nonValidJson); // throws SyntaxError
};
Accessing a property of an undefined variable:
exports.handler = function (event, context, callback) {
    var emptyObject = {};
    var value = emptyObject.Item.Key; // throws TypeError: Cannot read property 'Key' of undefined
};
Not closing a MySQL connection after accessing RDS leads to a Lambda timeout, and then it becomes non-responsive.
Making a lambda that re-uploads the code can take some time.
After some tests it turns out that Lambda does in fact try to restart (reload the container?); there is just not enough time. If you set the timeout to 10s, Lambda starts working after ~4s of execution time, and in subsequent runs it behaves normally. I've also tried playing with this setting:
context.callbackWaitsForEmptyEventLoop = false;
and putting all 'require' blocks inside the handler; nothing really worked. So a good way to prevent Lambda from becoming dead is setting a bigger timeout; 10s should be more than enough as a workaround protection against this bug.

Related

lambdas fail to log to CloudWatch

Situation - I have a lambda that:
is built with Node.js v8
has console.log() statements
is triggered by SQS events
works properly (the downstream system receives all messages, AWS X-Ray can see those executions)
Problem:
this lambda does not log anything!
But if the same lambda is called manually (using the "Test" button), all logging statements are visible in CloudWatch.
My lambda is based on this tutorial: https://www.jeremydaly.com/serverless-consumers-with-lambda-and-sqs-triggers/
A very similar situation occurs if the lambda is called from within another lambda (recursion). Only the first lambda (started manually) logs anything; every subsequent lambda in the recursion chain does not log.
An example can be found here:
https://theburningmonk.com/2016/04/aws-lambda-use-recursive-function-to-process-sqs-messages-part-1/
Any idea how to tackle this problem will be highly appreciated.

Thawing Lambda functions doesn't decrease latency

I'm using serverless-warmup-plugin to run a cron that invokes a Lambda function every 10 minutes. The code for the Lambda function looks like this:
exports.lambda = (event, context, callback) => {
  if (event.source === 'serverless-plugin-warmup') {
    console.log('Thawing lambda...')
    callback(null, 'Lambda is warm!')
  } else {
    // ... logic for the lambda function
  }
}
This works on paper but in practice the cron doesn't keep the Lambda function warm even though it successfully invokes it every 10 minutes.
When the Lambda is invoked via a different event source (other than the cron) it takes around 2-3 seconds for the code to execute. Once it's executed this way, Lambda actually warms up and starts responding under 400ms. And it stays warm for a while.
What am I missing here?
As the official documentation states:
Note
When you write your Lambda function code, do not assume that AWS Lambda always reuses the container because AWS Lambda may choose not to reuse the container. Depending on various other factors, AWS Lambda may simply create a new container instead of reusing an existing container.
It seems like a "bad architecture design" to try to keep a Lambda container up, but apparently it is a normal scenario for your warmed container not to be used when a different event source triggers a new container.

Azure Function deployment kills my running process - can I avoid this?

I have a few Azure Functions sharing the same code, so I created a batch file for publishing my libs. It is a simple .bat file: for each of my Azure Functions, it connects to a host and uses robocopy to synchronize folders.
However, each time I publish, currently running functions are dropped. I want to avoid that. Is there a way to let a running function terminate its work naturally?
I think it's possible because when I publish, I am not overwriting the actual running DLL; I copy files into the <azure-function-url>/site/wwwroot folder.
NOTE:
The function calls an async method without await. The async method has not completed its work when the source changes. (I was not focused on this problem; thanks Matt for the comment that opened my eyes.)
The functions runtime is designed to allow functions to gracefully exit in the event of host restarts, see here.
Not awaiting your async calls is an antipattern in functions, as we won't be able to track your function execution. We use the returned Task to determine when your function has finished. If you do not return a Task, we assume your function has completed when it returns.
In your case, that means we will kill the host on restarts while your orphaned asynchronous calls are running. If you fail to await async calls, we also don't guarantee successful:
Logging
Output bindings
Exception handling
Do: static async Task Run(...){ await asyncMethod(); }
Don't: static void Run(...){ asyncMethod(); }

Phantomjscloud not working with aws lambda nodejs

Creating an AWS Lambda function has been painful. I was able to deploy the same microservice easily with Google Cloud Functions, but when I ported it from GCF to Lambda (with some changes, such as handling the context object in AWS Lambda) and deployed the .zip of the project, it started throwing the unknown error shown below. The Lambda function works well in my local environment:
{
"errorMessage": "callback called with Error argument, but there was a problem while retrieving one or more of its message, name, and stack"
}
The logs show a syntax error in the parent script where the code begins, but there is no syntax error in index.js, which I confirmed by running node index.js. Anyway, I have attached the code snippet of index.js at the bottom.
START RequestId: 7260c7a9-0adb-11e7-b923-aff6d9a52d2d Version:
$LATEST Syntax error in module 'index': SyntaxError
END RequestId: 7260c7a9-0adb-11e7-b923-aff6d9a52d2d
I started to narrow down the piece of software causing the problem: I removed all the dependencies, included them one by one, and ran the Lambda each time after uploading the zip. I finally found the culprit: it is phantomjscloud that is causing the problem.
When I include const phantomJsCloud = require('phantomjscloud'), it throws that error, even though my node_modules folder includes the phantomjscloud module. Are there any known glitches between AWS Lambda and phantomjscloud? I have no clue how to solve this; feel free to ask for any information you feel I have missed.
Here is the code that works well without including const phantomJsCloud = require('phantomjscloud'):
global.async = require('async');
global.ImageHelpers = require('./services/ImageHelpers');
global.SimpleStorage = require('./services/SimpleStorage');
global.uuid = require('uuid');
global.path = require('path');
const phantomJsCloud = require('phantomjscloud');
const aadhaarController = require('./controllers/Aadhaar');
exports.handler = (event, context) => {
// TODO implement
aadhaarController.generateAadhaarCard(event,context);
};
Error message from aws lambda function when phantomjscloud is included:
AWS uses Node version 4.3, for which phantomjscloud was not supported; that is the reason it worked only with Google Cloud Functions, which has a 6.9.2 runtime environment. It has now been fixed by the author. If by any chance you are seeing this answer, you might be using some other version of Node that is not supported by phantomjscloud; raising a GitHub issue solved the problem.

Random 'ECONNABORTED' error when using sendFile in Express/Node

I have set up a Node server with Express middleware. I get the ECONNABORTED error randomly on some files when loading an HTML file, which triggers about 10 other loads (js, css, etc.). The exact error is:
{ [Error: Request aborted] code: 'ECONNABORTED' }
Generated by this simplified code (after I tried to debug the issue):
res.sendFile(res.locals.physicalUrl, function (err) {
    if (err)
        console.log(err);
    ...
});
Many posts say this error results from not specifying the full path name. That is not the situation here: I do specify the full path, and the error is generated randomly. There are times when the page and all its subsequent links load perfectly, and times when they do not. I tried flushing the cache and did not find any pattern connecting it with this.
This specific error appears to be a generic term for a socket connection getting aborted and is discussed in the context of other applications like FTP.
Having realized that the node worker threads can be increased, I tried to do so using:
process.env.UV_THREADPOOL_SIZE = 20;
However, my understanding is that even without this, at worst the file transfer may have to wait for a worker thread to become free, not get aborted. I am not talking about big files here; all files are less than 1 MB.
I have a gut feeling that this has nothing to do with node directly.
Please point to any other possibilities (node or otherwise) to handle this error, or any other indirect solutions. Retrying a few times could be one, but that would be clumsy. EDIT: No, I cannot retry. Headers are already sent when the error occurs!
A SIDE NOTE:
Many examples of the use of sendFile skip the callback, giving the impression that it is a synchronous call. It is not. Do use the callback at all times: check for success and only then move on to the "next" middleware, or take appropriate steps if the send fails for whatever reason. Not doing so can make the consequences difficult to debug in an asynchronous environment.
See https://stackoverflow.com/a/36949631/2798152
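The pattern from the side note can be sketched as a small helper (the helper name and arguments are mine; res is an Express response): always pass a callback to sendFile, and check res.headersSent before attempting recovery, since nothing can be resent once headers are out:

```javascript
// Sketch: send a file, log any error, and hand off to the error middleware
// only if headers have not been sent yet.
function sendAsset(res, physicalUrl, next) {
  res.sendFile(physicalUrl, function (err) {
    if (!err) return;                 // success, nothing more to do
    console.log(err);                 // e.g. { code: 'ECONNABORTED' }
    if (!res.headersSent) {
      next(err);                      // recovery still possible
    }
    // If headers are already sent, a retry is impossible; just log.
  });
}
```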
Could it be possible that in some cases you terminate the connection by calling res.end before the asynchronous call to res.sendFile ends?
If that's not the case, can you pastebin more of your application code?
Uninstalling and re-installing MongoDB solved this for me.
I was facing the same problem. It started happening when I had to force-restart my laptop because it became unresponsive. After restarting, trying to connect to the Mongo server using Node.js always threw an ECONNABORTED error.
