Moment-timezone can't load data in AWS Lambda - node.js

I'm using moment-timezone@0.5.23.
const moment = require('moment-timezone');
...
const now = moment().tz('America/Los_Angeles');
console.log(now.format('dddd'));
This works well when I run it on my laptop. However, when I deploy the code to my AWS Lambda function running on Node 8.10, I see this in the log:
Moment Timezone has no data for "America/Los_Angeles". See
http://momentjs.com/timezone/docs/#/data-loading/.
As a result, I end up with the time in America/New_York or UTC instead of America/Los_Angeles.
I've tried copying the packed data over and loading it manually (moment.tz.load(require('./latest'));), but I still got the same error.
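In case it helps, here is a minimal sketch of the two data-loading approaches I know of, assuming the packed data file actually makes it into the deployment .zip (the require paths below come from the npm package layout, not from my project):

// Approach 1: load the packed data explicitly before any tz() calls.
// The packed file ships with the npm package under data/packed/.
const moment = require('moment-timezone');
moment.tz.load(require('moment-timezone/data/packed/latest.json'));

// Approach 2: require a prebuilt bundle with the data already inlined,
// which avoids any data lookup at runtime.
// const moment = require('moment-timezone/builds/moment-timezone-with-data');

console.log(moment().tz('America/Los_Angeles').format('dddd'));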
Any way to make moment-timezone work properly on AWS Lambda?
Thanks,

Related

Lambda function gets stuck when calling RDS via SQLalchemy URI

I have a FastAPI application. Initially, I was passing my DB URI via an ngrok tunnel, like this in my SAM template. In this setup, Lambda uses my local machine's PostgreSQL DB.
DbConnnectionString:
  Type: String
  Default: postgresql://<uname>:<pwd>@x.tcp.ngrok.io:PORT/DB
This is how I read the URI in my Python code
# config.py
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# API_V1_STR is defined elsewhere in this config module
DATABASE_URL = os.environ.get('DB_URI')
db_engine = create_engine(DATABASE_URL)
db_session = sessionmaker(autocommit=False, autoflush=False, bind=db_engine)
print(f"Configs initialized for {API_V1_STR}")
# app.py
# 3rd party
from fastapi import FastAPI
# Custom
from config.app_config import PROJECT_NAME, db_engine
from models.db_models import Base
print("Creating all database")
Base.metadata.create_all(bind=db_engine)
app = FastAPI(title=PROJECT_NAME)
print("APP created")
In this setup, everything seems to work as expected.
But whenever I replace the DB URL with the RDS DB URL, the call gets stuck at the "Creating all database" step. When this happens, the Lambda always times out and throws exceptions.
If I run the code locally using uvicorn, this error doesn't occur; everything works as expected.
When I use sam local invoke, even with the RDS URL, the API call works without any issues. This problem occurs only when deployed in AWS Lambda.
I notice that the configs are initialized twice in this setup: once before START RequestId and once after.
I have tried reading up on this, but it's not clear what I could do to fix it. Any help would be much appreciated.
It was my bad! I didn't pay attention to security groups. It was a connection timeout all along. Once I fixed the port access in the security groups, the Lambda started working as expected.

How to set ssm param locally for serverless offline

I recently started working on serverless architecture. Here is an example of my serverless.yml for it.
test:
  name: test
  handler: handler.lambda_handler
  timeout: 6
  environment:
    APP_ID: ${ssm:/path/to/ssm/test~true}
Now when I try to run the serverless offline command, it complains about the ssm variable.
Following is the error that comes up on the console.
I want to run everything on my local machine for development. Can someone help me solve this problem?
ServerlessError: Trying to populate non string value into a string for variable ${ssm:/path/to/ssm/test~true}. Please make sure the value of the property is a string.
at Variables.populateVariable (C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:464:13)
at Variables.renderMatches (C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:386:21)
at C:\Users\kumarn\AppData\Roaming\npm\node_modules\serverless\lib\classes\Variables.js:406:29
From previous event:
You can solve this by adding the plugin:
https://github.com/janders223/serverless-offline-ssm
If you're feeling more adventurous, you can also use localstack: https://github.com/localstack/localstack
Note that the free version does not support everything.

Lambda which reads jpg/vector files from S3 and processes them using graphicsmagick

We have a Lambda which reads jpg/vector files from S3 and processes them using graphicsmagick.
This Lambda was working fine until today, but since this morning we have been getting errors while processing vector images using graphicsmagick.
"Error: Command failed: identify: unable to load module /usr/lib64/ImageMagick-6.7.8/modules-Q16/coders/ps.la': file not found # error/module.c/OpenModule/1278.
identify: no decode delegate for this image format/tmp/magick-E-IdkwuE' # error/constitute.c/ReadImage/544."
The above error occurs for certain .eps (vector) files while using the identify function of the gm module.
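For context, a minimal sketch of the kind of call that fails; the imageMagick subclass option and the /tmp input path are assumptions about our setup, not exact code:

var gm = require('gm').subClass({ imageMagick: true });

// identify() is where the "no decode delegate" error above surfaces
// for the affected .eps files.
gm('/tmp/input.eps').identify(function (err, data) {
  if (err) {
    console.error(err);
    return;
  }
  console.log(data);
});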
Could you please share your insights on this?
Please let us know whether any backend changes have recently gone through on the AWS end for the ImageMagick module which might have affected this Lambda.

Phantomjscloud not working with aws lambda nodejs

Creating an AWS Lambda function has been painful. I was able to deploy the same microservice easily with Google Cloud Functions, but when I ported the service from GCF to Lambda, with some changes to the handler function (such as the context in AWS Lambda), and deployed the project's .zip, it started throwing the unknown error shown below. The Lambda function works well in my local environment:
{
"errorMessage": "callback called with Error argument, but there was a problem while retrieving one or more of its message, name, and stack"
}
and the logs show a syntax error in the parent script where the code begins, but there is no syntax error in index.js, which I have confirmed by running node index.js. Anyway, I have attached the code snippet of index.js at the bottom.
START RequestId: 7260c7a9-0adb-11e7-b923-aff6d9a52d2d Version:
$LATEST Syntax error in module 'index': SyntaxError
END RequestId: 7260c7a9-0adb-11e7-b923-aff6d9a52d2d
I started to narrow down the piece of software causing the problem: I removed all the dependencies, included them back one by one, and ran the Lambda each time after uploading the zip. I finally found the culprit: it is phantomjscloud that is causing the problem.
When I include const phantomJsCloud = require('phantomjscloud'), it throws that error, even though my node_modules includes the phantomjscloud module. Are there any known glitches between AWS Lambda and phantomjscloud? I have no clue how to solve this; feel free to ask for any information you feel I have missed.
Here is the code that works well without including const phantomJsCloud = require('phantomjscloud'):
global.async = require('async');
global.ImageHelpers = require('./services/ImageHelpers');
global.SimpleStorage = require('./services/SimpleStorage');
global.uuid = require('uuid');
global.path = require('path');
const phantomJsCloud = require('phantomjscloud');
const aadhaarController = require('./controllers/Aadhaar');
exports.handler = (event, context) => {
  // TODO implement
  aadhaarController.generateAadhaarCard(event, context);
};
AWS used Node version 4.3, which phantomjscloud did not support; that is the reason it worked only with Google Cloud Functions, whose runtime environment was Node 6.9.2. This has now been fixed by the author. If by any chance you are seeing this answer, you might be using some other version of Node which is not supported by phantomjscloud; raising a GitHub issue solved the problem.
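As a quick sanity check (a generic sketch, not part of the original fix), you can log which Node version the Lambda runtime actually uses from inside the handler and compare it against the versions phantomjscloud supports:

// Hedged sketch: log the runtime's Node version from inside the handler.
exports.handler = (event, context, callback) => {
  console.log('Node version:', process.version);
  callback(null, { version: process.version });
};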

After NodeJS script has an error, it will not start again. AWS Lambda

I have a Lambda function on the nodejs4.x runtime. If my script stops execution due to an error, let's say I try to get .length of an undefined object, then I can't start the script again. It's not even that the script runs and hits the same error; the script doesn't run at all. The Lambda handler function is never called the second time.
This Lambda function is the endpoint for Amazon Alexa. When I re-upload the code (a zip file), the system works again.
Is this some behavior of Node.js? Is the script ending prematurely and corrupting the files so it cannot start again?
When the server hits an error I get this message: Process exited before completing request
And then subsequent requests hit the timeout limit.
Important Edit
I have pinpointed the issue to the npm request module. The module doesn't finish loading, i.e.:
console.log('i see this message');
var request = require('request');
console.log('this never happens');
A couple of things that I know:
If a Lambda invocation fails, for any reason, it will be invoked again (actually, it will be retried at most 3 times).
However, this is only true for asynchronous invocations; there are two types of invocations (see the sketch below).
Any external module that your Lambda's code requires must be included in the package that you deploy to the Lambda; I have explained this simply here.
You can write code that accesses a property of an undefined variable; yes, it will throw an exception, and if the invocation is asynchronous it will be retried 2 more times, which will of course fail too.
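To make the two invocation types concrete, here is a hedged sketch using the aws-sdk Lambda client; 'my-function' is a hypothetical name:

// Hedged sketch of the two invocation types mentioned above.
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

// Synchronous (RequestResponse): the caller receives the error directly;
// there is no automatic retry.
lambda.invoke({ FunctionName: 'my-function', InvocationType: 'RequestResponse' },
  function(err, data) { console.log(err, data); });

// Asynchronous (Event): a failed invocation is retried automatically.
lambda.invoke({ FunctionName: 'my-function', InvocationType: 'Event' },
  function(err, data) { console.log(err, data); });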
Since the Lambda function fails when calling require('request'), I believe that the project has not been deployed correctly. request must be deployed with the Lambda function because it is not part of Node.js 4.3.2 (the current Lambda JavaScript runtime).
Make sure that:
request is added to your package.json file (e.g. by calling $ npm install request --save, see npm install for details).
You create a deployment package by zipping your project folder (including the node_modules folder).
The deployment .zip is uploaded to your Lambda function.
So after contacting AWS through their forums, this turns out to be a bug. The container is not cleared upon an error, so the code has to be re-uploaded.
A solution is to make a CloudWatch alarm that fires another Lambda function that re-uploads the code automatically (a sketch follows the forum link below).
They are working on a fix.
Forum post: https://forums.aws.amazon.com/thread.jspa?threadID=238434&tstart=0
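A minimal sketch of that re-upload workaround, assuming a known-good build sits in S3; the function name, bucket, and key below are hypothetical:

// Hedged sketch: a second Lambda, fired by the CloudWatch alarm, pushes a
// known-good build from S3 back onto the broken function.
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

exports.handler = function(event, context, callback) {
  lambda.updateFunctionCode({
    FunctionName: 'alexa-endpoint',       // hypothetical function name
    S3Bucket: 'my-deploy-bucket',         // hypothetical bucket
    S3Key: 'builds/alexa-endpoint.zip'    // hypothetical key
  }, function(err, data) {
    callback(err, data);
  });
};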
In fact, there are many cases in which Lambda becomes unresponsive, e.g.:
Parsing invalid JSON:
exports.handler = function(event, context, callback)
{
    var nonValidJson = "Not even Json";
    var jsonParse = JSON.parse(nonValidJson);  // throws SyntaxError
};
Accessing a property of an undefined variable:
exports.handler = function(event, context, callback)
{
    var emptyObject = {};
    var value = emptyObject.Item.Key;  // TypeError: Item is undefined
};
Not closing a MySQL connection after accessing RDS leads to a Lambda timeout, and then it becomes non-responsive.
Making a Lambda that re-uploads the code can take some time.
After some tests, it turns out that Lambda does in fact try to restart (reload the container?); there is just not enough time. If you set the timeout to 10s, Lambda starts working after ~4s of execution time, and in subsequent runs it behaves normally. I've also tried playing with setting:
context.callbackWaitsForEmptyEventLoop = false;
and putting all 'require' blocks inside the handler; nothing really worked. So a good way to prevent Lambda from becoming dead is to set a bigger timeout; 10s should be more than enough as a workaround protection against this bug.
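For reference, this is where that flag is typically set (a generic sketch; as noted above, it did not help with this particular bug):

exports.handler = function(event, context, callback) {
  // Return as soon as the callback fires instead of waiting for the event
  // loop to drain (e.g. open MySQL connections).
  context.callbackWaitsForEmptyEventLoop = false;
  callback(null, 'done');
};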
