AWS Serverless Image Handler - Version 5 - Lambda running on Node.js 12 End of Life

I've recently taken over a project that has AWS Serverless Image Handler version 5 implemented, and the client has forwarded me several emails about Node.js v12 hitting end of life and the Lambda functions needing to run on a newer Node.js runtime.
Having a look through the AWS account, I've seen I can just switch the runtime to Node.js v14 or v16, but do I need to do any code updates?
Sorry, I'm a complete noob to Lambda, CloudFormation stacks, etc.
I thought I would ask before I jump down the rabbit hole and look at setting up my own copy of Serverless Image Handler to do some testing, or even trying to implement version 6.

That depends on the code and imported libraries in the Serverless Image Handler Lambda.
I would check whether there are breaking changes in Node 14 that could affect the code.
This has nothing to do with Lambda or CloudFormation themselves.

Related

Is it required to put engines and buildpack in a nodejs aws lambda application?

I'm not sure what the best practice is when it comes to AWS Lambdas. I have a Node 14 Lambda that has been running successfully. I use Terraform to initialize and maintain the code. Do I need to specify the node engines field and buildpacks? It runs fine without them.
No, buildpacks are not required for using Lambda.
It is possible to use AWS CodeBuild to create container images with buildpacks and ship them to the Lambda container runtime, but that is optional, and not a well-paved path.
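On the engines half of the question: engines in package.json is advisory metadata that some tooling (npm with engine-strict, various CI setups) can check, but Lambda itself ignores it; the function's runtime setting is what selects the Node version. A sketch of what the field looks like (package name is hypothetical):

```json
{
  "name": "my-lambda",
  "version": "1.0.0",
  "engines": {
    "node": ">=14"
  }
}
```

It can still be worth adding purely as documentation, so the intended Node version survives in the repository alongside the Terraform config.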

Amazon AWS S3 autoDeleteObjects lambda

Could you please help me understand how to specify the Node.js runtime version of the Lambda function that AWS creates automatically when a new data bucket is created with the parameter autoDeleteObjects: true?
I am using the following piece of code:
const autoDeleteBucketProps = { autoDeleteObjects: true, removalPolicy: cdk.RemovalPolicy.DESTROY };
new Bucket(this, 'store', {
  ...bucketProps,
  ...autoDeleteBucketProps
});
This code automatically creates a Lambda function with runtime Node.js 12.x for auto-deleting objects. However, because Amazon requires that we upgrade our Lambda runtimes (support for v12 is ending, as described in the Lambda runtime support policy), I am trying to find a way to upgrade the runtime of this automatically created Lambda to version 14.
I am using aws-cdk v1.152.0, which supports '@aws-cdk/aws-lambda' Runtime version v14. So why does this Lambda get created with runtime v12? And how can it be changed to v14, programmatically?
Thank you in advance.
I just updated one of our stacks from CDK 2.23.0 to 2.46.0 and the auto-deletion Lambda automatically updated to the Node 14 runtime.
You said you were using CDK 1.152.0; if for some reason you want to stick with v1, it should also update to the new runtime in 1.176.0, although I have not tested this myself, I was just reading the CDK changelog notes.
Updating to CDK v2 was quite easy, for us at least, and I think v1 is nearing end of life, so I suggest you move to v2 now or soon.
I think you should be able to update the runtime in the console, or recreate the function once v12 is no longer used.
You can find more details on the lambda runtimes here

Lambda Function NodeJS version always reverted to 12.x

I am running JavaScript Lambda functions that require NodeJS version 14.x. I can manually set this in the AWS Lambda console (screenshot below), but every time I use the Amplify CLI to push a change to the function, it gets reverted back to 12.x. I can't find any reference to the NodeJS version in the local Amplify files, or online. Is there a way to keep it from reverting every time?
This is what the Lambda console option looks like, which I edit to 14.x constantly, but it changes back to 12.x:
Currently (26 July 2021), AWS Amplify only supports NodeJS 12.x. Please see the Supported Lambda runtimes paragraph in the Amplify Docs for reference.
While matsev's answer looks to be the official version supported by AWS, I noticed that while some of my functions continued to be reverted to 12.x, others stayed at 14.x even after a push. The ones that stayed at 14.x were my more recently created functions.
It turns out you can edit the *.cloudformation-template.json for each function to set the Node.js version! This file did not appear in my VS Code search because a previous developer had hidden it using files.exclude. That is probably a good practice, because the file is meant to be edited only via the Amplify CLI, but there are obviously some settings that are not covered by the CLI commands; another example is the timeout parameter in here.
The file is located at: amplify/backend/function/FUNCTION_NAME/FUNCTION_NAME-cloudformation-template.json
The parameter to edit is: `Resources.LambdaFunction.Properties.Runtime = "nodejs14.x"`
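For reference, the relevant slice of that template looks roughly like this (trimmed; the real file also contains Handler, Role, Code, and other properties):

```json
{
  "Resources": {
    "LambdaFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Runtime": "nodejs14.x"
      }
    }
  }
}
```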

AWS Lambda Dev Workflow

I've been using AWS for a while now but am wondering about how to go about developing with Lambda. I'm a big fan of having server-less functions and letting Amazon handle the maintenance and have been using it for a while. My question: Is there a recommended workflow for version control and development?
I understand there's the ability to publish a new version in Lambda. And that you can point to specific versions in a service that calls it, such as API Gateway. I see API Gateway also has some nice abilities to partition who calls which version. i.e. Having a test API and also slowly rolling updates to say 10% of production API calls and scaling up slowly.
However, this feels a bit clunky for an actual version control system. Perhaps the functions are coded locally and uploaded using the AWS CLI and then everything is managed through a third party version control system (Github, Bitbucket, etc)? Can I deploy to new or existing versions of the function this way? That way I can maintain a separation of test and production functions.
Development also doesn't feel as nice through the editor in Lambda. Not to mention that using custom packages requires uploading anyway. Local development seems the better solution. I'm trying to understand others' workflows so I can improve mine.
How have you approached this issue in your experience?
I wrote roughly a dozen Lambda functions that trigger on an S3 file write event or on a schedule, and make an HTTP request to an API to kickstart data processing jobs.
I don't think there's any gold standard. From my research, there are various approaches and frameworks out there. I decided that I didn't want to depend on any framework like Serverless or Apex, because I didn't want to learn those tools on top of learning about Lambda. Instead I built out improvements organically, based on my needs as I was developing a function.
To answer your question, here's my workflow.
Develop locally and git commit changes.
Mock test data and test locally using mocha and chai.
Run a bash script that creates a zip file compressing files to be deployed to AWS lambda.
Upload the zip file to AWS lambda.
You can have version control on your Lambda using AWS CodeCommit (much simpler than using an external git repository system, although you can do either). Here is a tutorial for setting up a CodePipeline with commit/build/deploy stages: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-codecommit.html
This example deploys an EC2 instance, so for the deploy portion for a lambda, see here
If you set up a pipeline you can have an initial commit stage, then a build stage that runs your unit tests and packages the code, and then a deploy stage (and potentially more stages if required). It's a very organized way of deploying lambda changes.
I would suggest you have a look at SAM. SAM is a command line tool and a framework that helps you develop your serverless application. Using SAM, you can test your applications locally before uploading them to the cloud. It also supports blue/green deployments and CI/CD workflows, starting automatically from GitHub.
https://github.com/awslabs/aws-sam-cli
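With SAM, the runtime and handler are pinned explicitly in template.yaml, so the deployed configuration can't silently drift. A minimal sketch (function name and CodeUri are hypothetical):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs14.x
      CodeUri: ./src
```

sam build packages the function, sam local invoke runs it locally, and sam deploy pushes the stack, which covers most of the zip-and-upload workflow described above.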

Online-Edit Amazon Lambda function with alexa-sdk

I am creating an Alexa skill and I am using AWS Lambda to handle the intents. I found several tutorials online and decided to use NodeJs with the alexa-sdk. After installing the alexa-sdk with npm, the zipped archive takes up ~6 MB of disk space. If I upload it to Amazon, it tells me
The deployment package of your Lambda function "..." is too large to enable inline code editing. However, you can still invoke your function right now.
My index.js has a size of < 4 KB, but the dependencies are large. If I want to change something, I have to zip everything together (index.js and the "node_modules" folder with the dependencies), upload it to Amazon, and wait till it's processed, because online editing isn't available anymore. So every single change to index.js wastes more than a minute of my time zipping and uploading. Is there a way to use the alexa-sdk dependency (and other dependencies) without uploading the same code every time I change something? Is there a way to use the online editing function even though I am using large dependencies? I just want to edit index.js.
If the size of your Lambda function's zipped deployment packages exceeds 3MB, you will not be able to use the inline code editing feature in the Lambda console. You can still use the console to invoke your Lambda function.
It's mentioned here under AWS Lambda Deployment Limits.
ASK-CLI
The ASK Command Line Interface lets you manage your Alexa skills and related AWS Lambda functions from your local machine. Once you set it up, you can make the necessary changes in your Lambda code or skill and use the deploy command to deploy the skill. The optional target lets you deploy just the associated Lambda code.
ask deploy [--no-wait] [-t| --target <target>] [--force] [-p| --profile <profile>] [--debug]
More info about ASK CLI here and more about deploy command here
