phantomjscloud not working with AWS Lambda (Node.js)

Creating an AWS Lambda function has been painful. I was able to deploy the same microservice easily with Google Cloud Functions. When I ported it from GCF to Lambda, with some changes to the handler function (such as the context argument in AWS Lambda), and deployed the project .zip, it started throwing the unknown error shown below. The Lambda function works fine in my local environment:
{
"errorMessage": "callback called with Error argument, but there was a problem while retrieving one or more of its message, name, and stack"
}
and the logs show a syntax error in the parent script where the code begins, but there is no syntax error in index.js, which I have confirmed by running node index.js. Anyway, I have attached the code snippet of index.js at the bottom.
START RequestId: 7260c7a9-0adb-11e7-b923-aff6d9a52d2d Version:
$LATEST Syntax error in module 'index': SyntaxError
END RequestId: 7260c7a9-0adb-11e7-b923-aff6d9a52d2d
I started narrowing down which piece of software was causing the problem: I removed all the dependencies, then added them back one by one, running the Lambda each time after uploading the zip, and finally found the culprit. It is phantomjscloud that causes the problem.
When I include const phantomJsCloud = require('phantomjscloud') it throws that error, even though my node_modules folder includes the phantomjscloud module. Are there any known glitches between AWS Lambda and phantomjscloud? I have no clue how to solve this; feel free to ask for any information you feel I have missed.
Here is the code, which works well when const phantomJsCloud = require('phantomjscloud') is not included:
global.async = require('async');
global.ImageHelpers = require('./services/ImageHelpers');
global.SimpleStorage = require('./services/SimpleStorage');
global.uuid = require('uuid');
global.path = require('path');
const phantomJsCloud = require('phantomjscloud')
const aadhaarController = require('./controllers/Aadhaar')
exports.handler = (event, context) => {
// TODO implement
aadhaarController.generateAadhaarCard(event,context);
};
Error message from the AWS Lambda function when phantomjscloud is included:

AWS Lambda uses Node version 4.3, which phantomjscloud did not support; that is why it worked only on Google Cloud Functions, whose runtime environment is Node 6.9.2. The author has now fixed it. If you are seeing this answer, you might be using some other Node version that phantomjscloud does not support; raising a GitHub issue solved the problem.
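To confirm which Node version a function actually runs on (and whether it matches what phantomjscloud supports), a throwaway handler like the following can be deployed; this is a minimal sketch using only standard Node:
exports.handler = (event, context, callback) => {
  // process.version reports the runtime's Node version, e.g. 'v4.3.2';
  // compare it against the versions phantomjscloud declares support for.
  console.log('Node version:', process.version);
  callback(null, { node: process.version });
};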

Related

AWS Xray NodeJS: How to Fix "Missing AWS Lambda trace data for X-Ray" messages on startup

We have several NodeJS Lambdas with AWS X-Ray with the following general setup.
process.env.AWS_XRAY_DEBUG_MODE = 'TRUE'
process.env.AWS_XRAY_TRACING_NAME = 'api-extensions'
console.log('Enabled XRAY debug mode')
import AWSXRay from 'aws-xray-sdk-core'
import { inputHandler } from './lib/handler'
import Sentry from './lib/sentry'
if (process.env.AWS_XRAY_ENABLED) {
AWSXRay.captureHTTPsGlobal(require('http'), true)
AWSXRay.captureHTTPsGlobal(require('https'), true)
AWSXRay.capturePromise() // <-- causes the startup messages
}
export const handler = Sentry.wrapHandler(inputHandler)
All these Lambdas give me one of the following errors on startup (during initialisation):
Missing AWS Lambda trace data for X-Ray. Ensure Active Tracing is enabled and no subsegments are created outside the function handler
or
Missing AWS Lambda trace data for X-Ray. Expected _X_AMZN_TRACE_ID to be set
My understanding is that we need capturePromise() for our axios dependency.
I'm wondering where those messages come from and how I can fix them.
Relevant details (will add on demand/request):
AWS_XRAY_ENABLED is set
package version: "aws-xray-sdk-core": "3.3.1"
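The messages are emitted during the init phase, when the capture calls run at module load but no Lambda trace context exists yet. A hedged sketch of one workaround, deferring the X-Ray setup to the first invocation (the withXRay wrapper is hypothetical, not part of the SDK):
import AWSXRay from 'aws-xray-sdk-core'
import { inputHandler } from './lib/handler'
import Sentry from './lib/sentry'

let xrayInitialized = false

// Hypothetical wrapper: run the capture calls on the first invocation,
// when _X_AMZN_TRACE_ID and the rest of the trace context are available,
// instead of at module load.
const withXRay = (innerHandler) => async (event, context) => {
  if (process.env.AWS_XRAY_ENABLED && !xrayInitialized) {
    AWSXRay.captureHTTPsGlobal(require('http'), true)
    AWSXRay.captureHTTPsGlobal(require('https'), true)
    AWSXRay.capturePromise()
    xrayInitialized = true
  }
  return innerHandler(event, context)
}

export const handler = Sentry.wrapHandler(withXRay(inputHandler))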

Unable to deploy/update google cloud function

I have a Firebase project with 29 functions: 2 in Python and 27 in Node.js.
I modified 2 of them and now I can't deploy properly. I get an error log that sends me to the log viewer, and one of the errors is:
ERROR: build step 3
"us.gcr.io/fn-img/buildpacks/nodejs10/builder:nodejs10_20201201_20_RC00"
failed: step exited with non-zero status: 46
The functions keep working, but I can't update/deploy properly. When I try to deploy them individually I get that error for both functions, but when I try to deploy ALL the functions I only get the error for those 2; the rest of the functions, which have no modifications, redeploy without problems.
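For reference, the individual and full deploys described above follow the standard Firebase CLI forms (the function name here is a placeholder):
$ firebase deploy --only functions:myModifiedFunction
$ firebase deploy --only functions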
I checked the source code in the Cloud console and they have a warning icon saying that:
Function is active, but last deployment failed
The source code in the Cloud console is the same as the one I'm trying to deploy. The functions still have the same functionality as before the changes; they keep working but can't be updated.
These are javascript functions that I deployed using the Firebase Node Sdk.
Any help?
EDIT I:
I reverted the changes on one of the functions, which has been there for over 2 years, and I still have the same issue: I can't update/deploy it. That function triggers on storage.onFinalize().
The other function triggers on firestore.onCreate().
EDIT II:
The newest function that I created is not in use; it is part of a new feature in my Android application. So I duplicated it, gave it a different name, and deployed it without issues. In that case I could delete the original function without any problem, as it is not being used. But I can't do the same for the other function, which is constantly in use.

Moment-timezone can't load data in AWS Lambda

I'm using moment-timezone#0.5.23.
const moment = require('moment-timezone');
...
const now = moment().tz('America/Los_Angeles');
console.log(now.format('dddd'));
This works well when I run it on my laptop. However, when I deploy the code to my AWS Lambda function running on Node 8.10, I see this in the log
Moment Timezone has no data for "America/Los_Angeles". See
http://momentjs.com/timezone/docs/#/data-loading/.
As a result, I end up with the time in either America/New_York or UTC instead of America/Los_Angeles.
I've tried copying the packed data over and loading it manually (moment.tz.load(require('./latest'));) but still got the same error.
Any way to make moment-timezone work properly on AWS Lambda?
Thanks,
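One sketch worth trying, based on the data-loading page the error links to: require the data-less core build and feed it the packed data that ships inside the moment-timezone package itself (the paths below assume a standard npm layout and that node_modules, including the JSON data files, made it into the deployment zip):
const moment = require('moment-timezone/moment-timezone'); // core build without bundled data
// Load the packed zone data that ships with the package, rather than a copied file.
moment.tz.load(require('moment-timezone/data/packed/latest.json'));

const now = moment().tz('America/Los_Angeles');
console.log(now.format('dddd'));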

google-cloud-pubsub TypeError: state.topic.publish is not a function

[Screenshot: node-RED error output in the terminal]
I am working in the sphere of IoT and want to push messages to Pub/Sub in Google, but every time I run my node-RED I get the following error:
25 Dec 18:40:49 - [error] [google-cloud-pubsub out:b2451409.071148] TypeError: state.topic.publish is not a function
As a source code, I used pub/sub contribution in github, link:
https://github.com/GoogleCloudPlatform/node-red-contrib-google-cloud/blob/master/pubsub.js
The code seems to work fine with the credentials: it does create a new topic in Google when the topic is not present in the cloud. However, the message is not published to the topic, and when messages repeat at a particular interval, the problem above arises.
Does anyone know how to solve this problem?
I think you've been using an older version of the pubsub API:
const topic = pubsub.topic('YOUR-TOPIC-NAME')
topic.publish(yourData, callback)
The new API as documented here (https://cloud.google.com/pubsub/docs/publisher) looks like this:
const topic = pubsub.topic('YOUR-TOPIC-NAME')
const publisher = topic.publisher()
publisher.publish(dataBuffer, dataJSON, callback)
Hope this fixes your problem.
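Pulled together, a minimal end-to-end sketch of the newer flow; the import style and project id are assumptions matching the older 0.x @google-cloud/pubsub releases that expose topic.publisher():
// Assumes a 0.x @google-cloud/pubsub release with the publisher() API.
const pubsub = require('@google-cloud/pubsub')({ projectId: 'your-project-id' }); // placeholder project

const topic = pubsub.topic('YOUR-TOPIC-NAME');
const publisher = topic.publisher();

// publisher.publish expects a Buffer payload.
const dataBuffer = Buffer.from(JSON.stringify({ deviceId: 'sensor-1', temp: 21 }));
publisher.publish(dataBuffer, (err, messageId) => {
  if (err) {
    console.error('Publish failed:', err);
  } else {
    console.log('Published message', messageId);
  }
});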

After NodeJS script has an error, it will not start again. AWS Lambda

I have a Lambda function on the nodejs4.x runtime. If my script stops execution due to an error, let's say I try to get .length of an undefined object, then I can't start the script again. It's not even that the script runs and hits the same error; the script doesn't run at all. The Lambda handler function is never called the second time.
This Lambda function is the endpoint for Amazon Alexa. When I re-upload the code (a zip file), the system works again.
Is this some behavior of Node.js? Is the script ending prematurely, corrupting the files so it cannot start again?
When the server hits an error I get this message: Process exited before completing request
And then subsequent requests hit the timeout limit.
Important Edit
I have pinpointed the issue to the npm request module. The module doesn't finish loading, i.e.:
console.log('i see this message');
var request = require('request');
console.log('this never happens');
Couple of things that I know:
If a Lambda invocation fails, due to any reason, it will be invoked again (actually it will be retried at most 3 times).
However, this is only true for asynchronous invocations; there are two types of invocations.
Any external module that your Lambda's code requires must be included in the package that you deploy to Lambda; I have explained this simply here.
You can write code that accesses a property of an undefined variable; yes, it will throw an exception, and if the invocation is asynchronous it will be retried 2 more times, which will fail too, of course.
Since the Lambda function fails when calling require('request'), I believe the project has not been deployed correctly. request must be deployed with the Lambda function because it is not part of Node.js 4.3.2 (the current Lambda JavaScript runtime).
Make sure that:
request is added to your package.json file (e.g. by calling $ npm install request --save, see npm install for details).
You create a deployment package by zipping your project folder (including the node_modules folder).
The deployment .zip is uploaded to your Lambda function (a command-line sketch of these steps follows below).
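As a sketch, that whole cycle from the command line (the function name is a placeholder):
$ npm install request --save
$ zip -r function.zip .
$ aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip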
So after contacting AWS through their forums, this turns out to be a bug. The container is not cleared upon an error, so the code has to be re-uploaded.
A solution is to make a CloudWatch alarm that fires another Lambda function that re-uploads the code automatically.
They are working on a fix.
Forum post: https://forums.aws.amazon.com/thread.jspa?threadID=238434&tstart=0
In fact there are many cases when Lambda becomes unresponsive, e.g.:
Parsing invalid JSON:
exports.handler = function(event, context, callback)
{
    var nonValidJson = "Not even Json";
    var jsonParse = JSON.parse(nonValidJson); // throws SyntaxError
};
Accessing a property of an undefined variable:
exports.handler = function(event, context, callback)
{
    var emptyObject = {};
    var value = emptyObject.Item.Key; // throws TypeError: cannot read property of undefined
};
Not closing a MySQL connection after accessing RDS leads to a Lambda timeout, and then it becomes non-responsive.
Making a Lambda that re-uploads the code can take some time.
After some tests it turns out that Lambda does in fact try to restart (reload the container?); there is just not enough time. If you set the timeout to 10s, Lambda starts working after ~4s of execution time, and subsequent runs behave normally. I've also tried playing with setting:
context.callbackWaitsForEmptyEventLoop = false;
and putting all require blocks inside the handler; nothing really worked. So a good way to prevent Lambda from becoming dead is to set a bigger timeout; 10s should be more than enough as a workaround protection against this bug.
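Putting the failure modes above together, a defensive-handler sketch (the mysql usage and connection details are illustrative placeholders, not from the original posts):
var mysql = require('mysql'); // must be bundled in the deployment zip

exports.handler = function(event, context, callback) {
  // Don't keep the container alive waiting on open sockets
  // (e.g. a lingering RDS connection).
  context.callbackWaitsForEmptyEventLoop = false;

  var payload;
  try {
    payload = JSON.parse(event.body || '{}'); // guard against invalid JSON
  } catch (err) {
    return callback(err);
  }

  var connection = mysql.createConnection({
    host: 'example-rds-host', // placeholder
    user: 'user',
    password: 'pass',
    database: 'db'
  });

  connection.query('SELECT 1', function(err, rows) {
    connection.end(); // always close the connection to avoid timeouts
    if (err) return callback(err);
    callback(null, { rows: rows });
  });
};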
