I am quite new to developing Lambdas with Node.js, so this question might sound silly.
One of the limitations of Lambda is the size of the function and its dependencies (250 MB), and I was wondering whether aws-sdk (which is over 45 MB) can be treated as a dev dependency, since it occupies about 1/5 of the total size allowed for a Lambda.
I understand that it is required during development, but isn't it already present in the Lambda container once the function is deployed to AWS?
Any suggestion would help, as all the articles I have browsed seem to install it as a production dependency.
Absolutely. The aws-sdk package is available by default as an NPM dependency inside the Lambda containers, so if you leave it as a dev dependency your code will still work on Lambda.
Here you can see which Lambda runtimes ship which version of the AWS SDK. If you really need a specific version, or one that is not yet available on the Lambda containers, you can manually include your own.
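A minimal sketch of what that looks like in practice (the project name and version range are placeholders, nothing here is specific to any real project):

```json
{
  "name": "my-lambda",
  "version": "1.0.0",
  "main": "index.js",
  "devDependencies": {
    "aws-sdk": "^2.1000.0"
  }
}
```

If you then package with `npm install --production` (or `npm ci --omit=dev`), aws-sdk is left out of the deployment zip, while `require('aws-sdk')` still resolves at runtime because the module is preinstalled in the Lambda environment.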
I'm using CDKTF version 0.9.4 to deploy two stacks associated with an app. The docs say I simply have to list them all or use '*'.
Running cdktf deploy '*' I get: Can't find the given stack *. I then listed them all explicitly and received Unable to find remote state, which makes me wonder whether cross-stack dependencies are available only for cloud usage.
On the other hand, this tells me that deploying multiple stacks isn't available, even though this person does it.
I'm using Python, so maybe that's the problem?
Any help is appreciated.
The cross-stack dependency was declared in two ways (the errors were the same):
Exposing the objects as properties of the stack:
first_stack = Lambda(app, "my-lambda")
second_stack = ApiGateway(app, "api_gateway", first_stack.lambda_function)
Using the add_dependency() method. I'm not sure I'm doing this right; the former seems more appropriate:
first_stack = Lambda(app, "my-lambda")
second_stack = ApiGateway(app, "api_gateway", first_stack.lambda_function)
second_stack.add_dependency(first_stack)
As kornshell93 already said, you need to update your CDKTF version to 0.10 or higher, since this feature was only recently introduced. On 0.9 you should still be able to run cdktf deploy first-stack && cdktf deploy second-stack, though, since cross-stack references were already in place.
As a project, I am trying to create a CI/CD pipeline running inside an AWS Lambda application.
The problem I am facing is that AWS Lambda is missing some tools (for example xargs) that certain applications (for example Gradle) require to run properly:
/tmp/repo/gradlew: line 234: xargs: command not found
Or even more interestingly:
install: apt-get: command not found
How can I install the required tools to build the applications from within an AWS Lambda container?
How can I utilize layers to speed up those containers?
That is, I assume I need to register that certain CLI tools are present in mounted layers.
On Windows I would do this by (ab)using the PATH environment variable, but what is the recommended way to do this on Linux?
And how can I tell tools to look for their dependencies in those layers, to avoid errors like:
ld.gold: error: cannot find -lcurl
The best option, as far as I can tell, is to create a Docker image containing all the software you require and provide it to the AWS Lambda service.
There is extensive documentation on how to run Docker containers in AWS Lambda:
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
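As a rough sketch of such an image (the base image tag and the list of installed tools are assumptions; adjust them to your runtime and pipeline):

```dockerfile
# Start from the AWS-provided Node.js base image for Lambda
FROM public.ecr.aws/lambda/nodejs:18

# Install the build tools the pipeline needs (xargs is part of findutils).
# Lambda base images are Amazon Linux, hence yum rather than apt-get --
# which is also why "apt-get: command not found" shows up in the first place.
RUN yum install -y findutils git tar gzip unzip && yum clean all

# Copy the function code and declare the handler
COPY index.js ${LAMBDA_TASK_ROOT}/
CMD ["index.handler"]
```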
Personal note: While I like the idea of a challenge or proof of concept, I'd recommend using one of the many CI/CD services out there instead of building one of your own. I cannot think of any upside to this. AWS itself offers CI/CD solutions such as AWS CodePipeline.
You might want to have a look at the following documentation:
https://aws.amazon.com/getting-started/hands-on/set-up-ci-cd-pipeline/
I have recently created an EFS instance for my Lambda in order to host my project dependencies, since they exceed the 250 MB hard cap. I managed to get my file system and EC2 instance up and running with the appropriate permissions, and I also configured my Lambda to use the EFS. The only part I am confused about:
How do I import these dependencies from EFS into my Lambda code?
Do I use require() with an absolute path to the module?
I have only found tutorials for doing it in Python.
As Ervin said in the comments, using Docker was the way to go about this.
I have developed a Node.js based function and want to run it on AWS Lambda. The problem is that its size is greater than 50 MB, and AWS Lambda requires directly uploaded function code to be under 50 MB.
In my case the node modules take up 43 MB and the actual code is around 7 MB. So is there any way I can separate my node modules from the code? Maybe we can store the node modules in an S3 bucket and then access them from AWS Lambda? Any suggestions would be helpful. Thanks.
P.S.: Due to some dependency issues I can't run this function as a Docker image on Lambda.
If you do not want or cannot use Docker packaging, you can zip up your node_modules into an S3 bucket.
Your handler (or the module containing your handler), can then download the zip archive and extract files to /tmp. Then, you require() your modules from there.
The above description may not be 100% accurate, as there are many ways of doing this, but that's the general idea.
This is one deployment method that Zappa, a tool for deploying Python/Django apps to AWS Lambda, supported long before Docker containers were allowed in Lambda.
https://github.com/Miserlou/Zappa/pull/548
You can use Lambda layers, which are a perfect fit for your use case. Some time ago we needed the Facebook SDK for one of our projects; we created a Lambda layer for it (32 MB), and the deployment package then shrank to only 4 KB.
The documentation states:
Using layers can make it faster to deploy applications with the AWS Serverless Application Model (AWS SAM) or the Serverless framework. By moving runtime dependencies from your function code to a layer, this can help reduce the overall size of the archive uploaded during a deployment.
A single Lambda function can use up to five layers. The maximum total size of the unzipped function and all layers is 250 MB, which is well above your limits.
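For a Node.js layer, note that the zip has to follow a specific folder layout so the runtime picks the modules up on its require path. Lambda extracts layers to /opt, and /opt/nodejs/node_modules is on the Node.js module search path (the package name below is just an example):

```
layer.zip
└── nodejs/
    └── node_modules/
        ├── facebook-sdk/
        └── ...
```

With that layout, a plain require('facebook-sdk') in the function code resolves against the layer without any extra configuration.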
AWS recently released Lambda, a compute service that can be triggered within milliseconds and currently supports only Node.js.
I'm curious how they implement the resource isolation. If they use something like Docker, it may take a few seconds to start a container. If they run the Node.js code directly, how can they support different versions of Node.js? That would be a big problem if they want to support other programming languages.
According to the docs, Lambda currently (at the time of this writing) supports only Node.js v0.10.32. In the future there will likely be an option to specify the language and version when creating the function; Lambda will then ensure it runs in the correct execution environment (which, by the way, is probably not Docker).