I am using Lambda as an ETL tool to process raw files coming into an S3 bucket.
As time passes, the functionality of the Lambda function will grow.
Each month I will change the Lambda function, so I want to publish versions 1, 2, 3, and so on.
How do I make the S3 bucket trigger a particular version of the Lambda function for the files?
How do I test production vs. test functionality in this case?
From the AWS Lambda function aliases documentation:
When you use a resource-based policy to give a service, resource, or account access to your function, the scope of that permission depends on whether you applied it to an alias, to a version, or to the function. If you use an alias name (such as helloworld:PROD), the permission is valid only for invoking the helloworld function using the alias ARN. You get a permission error if you use a version ARN or the function ARN. This includes the version ARN that the alias points to.
For example, the following AWS CLI command grants Amazon S3 permissions to invoke the PROD alias of the helloworld Lambda function. Note that the --qualifier parameter specifies the alias name.
$ aws lambda add-permission --function-name helloworld \
--qualifier PROD --statement-id 1 --principal s3.amazonaws.com --action lambda:InvokeFunction \
--source-arn arn:aws:s3:::examplebucket --source-account 123456789012
In this case, Amazon S3 is now able to invoke the PROD alias. Lambda can then execute the helloworld Lambda function version that the PROD alias references. For this to work correctly, you must use the PROD alias ARN in the S3 bucket's notification configuration.
How do I make the S3 bucket trigger a particular version of the Lambda function for the files?
Best practice is not to point to Lambda versions directly, but to use a Lambda alias, which will point to whichever version you configure. You can just append the alias name to the function's ARN:
arn:aws:lambda:region:account-id:function:functionName:aliasName
How do I test production vs. test functionality in this case?
You can trigger the same event multiple times with different Lambda aliases (such as a production alias and a testing one).
[Image: example of multiple event notifications]
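As a minimal boto3 sketch of such a notification configuration, reusing the helloworld function and account ID from the documentation excerpt above; the region, bucket name, TEST alias, and key prefixes are hypothetical, and each alias needs its own add-permission grant as shown earlier:

import boto3

s3 = boto3.client("s3")

# Route objects under incoming/ to the PROD alias and objects under
# test-incoming/ to a TEST alias of the same function.
s3.put_bucket_notification_configuration(
    Bucket="examplebucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "prod-etl",
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:helloworld:PROD",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "incoming/"}]}},
            },
            {
                "Id": "test-etl",
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:helloworld:TEST",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "test-incoming/"}]}},
            },
        ]
    },
)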
I've gone through hundreds of blogs/videos/resources, but nowhere is it mentioned how to create a simple Lambda function for a Node.js REST API locally using VS Code, the AWS Toolkit extension, and the AWS CLI. Is there any way I can create a simple Node.js endpoint locally and run it using the above, rather than Serverless or SAM? (There are some internal restrictions, hence I can't use them.)
What you need is to set up an API Gateway event trigger for your Lambda that fires whenever an HTTP request comes in. Here are the steps:
1. Look into the Serverless Framework, where you will define a serverless.yaml file with configuration describing how your Lambda gets invoked (in this case, an HTTP event); a minimal sketch follows these steps.
2. In your IDE of choice, use the serverless-offline npm package.
3. Your IDE config will look something like this (this example uses the IntelliJ IDE):
[Image: IDE config to start up Lambda locally]
4. Once you start up the service locally, you should be able to hit the REST endpoint locally using any REST client, such as Postman.
5. Instead of (4) above, you could also directly invoke your Lambda function locally using the AWS CLI:
aws lambda invoke /dev/null \
  --endpoint-url http://localhost:3002 \
  --function-name <Your lambda function name> \
  --payload '{<Your payload>}'
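For reference, a minimal serverless.yaml sketch for step (1); the service name, runtime, and handler are hypothetical placeholders:

# serverless.yaml - hypothetical service and handler names
service: my-local-api

provider:
  name: aws
  runtime: nodejs14.x

plugins:
  - serverless-offline   # serves the function locally

functions:
  hello:
    handler: handler.hello   # exports.hello in handler.js
    events:
      - http:                # API Gateway HTTP event
          path: hello
          method: get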
In my serverless web app (Node.js), which supports multi-tenancy, I have the following architecture:
A controller layer - each controller is a Lambda function (in a separate repository).
A service layer - each service is a Lambda function (in another separate repository), which also calls DynamoDB.
Currently the controller calls the service Lambda over HTTP (for development purposes only), and we want to improve this using the aws-sdk with lambda.invoke() or Step Functions.
If we use lambda.invoke(), each Lambda function needs a stable ARN that can be used from the other Lambdas.
My question is: how can I have an ARN per tenant+Lambda, and how can I maintain it?
Alternatively, if we use Step Functions, I want to know whether they are suitable for this kind of architecture.
You don't need to maintain the Lambda ARNs if you know where to look up the current one. You can use an export from a CloudFormation stack, an SSM parameter, DynamoDB, or anything really. Even for the Step Function you can redeploy using the CloudFormation output exports, and it will point to the correct ARNs.
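A minimal sketch of the SSM-parameter approach; the parameter naming convention and payload shape are hypothetical, and the parameter would be written at deploy time (e.g. from a CloudFormation output):

import json
import boto3

ssm = boto3.client("ssm")
lambda_client = boto3.client("lambda")

def invoke_service(tenant_id, payload):
    # Hypothetical convention: one parameter per tenant+service holding the ARN.
    param = ssm.get_parameter(Name="/services/" + tenant_id + "/user-service/arn")
    service_arn = param["Parameter"]["Value"]

    response = lambda_client.invoke(
        FunctionName=service_arn,          # invoke() accepts a full ARN
        InvocationType="RequestResponse",  # synchronous call
        Payload=json.dumps(payload).encode(),
    )
    return json.loads(response["Payload"].read())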
If I want to call AWS SES from AWS Lambda, I normally write the following when instantiating the AWS Helper Class:
var ses = new aws.SES({apiVersion: '2010-12-01', region: 'eu-west-1'});
I'm wondering: do I actually need to specify the AWS region, or will the AWS SES helper class just run in the region where the Lambda function is running?
What is the best practice here? Might I encounter problems later if I omit this?
I have always specified the region for the sake of being explicit. I went and changed one of my Node.js Lambda functions using SNS to use an empty constructor instead of providing a region, and deployed it... it appears to still work. It looks like the service will run in the region of the Lambda function it is called from. I imagine the IAM role for the Lambda function would play a part as well. As far as best practice goes, I think it is best to be explicit when possible, assuming it isn't creating a ton of overhead/hassle. The risk you run in the future is using a resource that isn't available in certain regions.
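If you want to stay explicit without hard-coding a region, the Lambda runtime sets the AWS_REGION environment variable (process.env.AWS_REGION in Node.js); a boto3 equivalent sketch:

import os
import boto3

# Explicit about the region, while still following whichever region
# the Lambda function itself runs in.
ses = boto3.client("ses", region_name=os.environ["AWS_REGION"])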
I need to spin up an instance using Lambda on an S3 trigger. The Lambda has to spin up an EC2 instance and trigger a user data script.
I have an AWS CLI command, something like: aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
I'm looking for a Python boto3 implementation. Since Lambda is new to me, can someone share if it's doable?
One way is for Lambda to run a CloudFormation template with the UserData as part of it, but I think there should be an easier way to achieve this.
Just include the UserData parameter in your boto3 call.
You should use code like this:
import boto3

ec2 = boto3.resource('ec2')
ec2.create_instances(
    ImageId='<ami-image-id>',
    InstanceType='t1.micro',
    MinCount=1,  # MinCount and MaxCount are required
    MaxCount=1,
    UserData='string',  # your user data script as a string
)
If you don't need the resource-style create_instances call, but just want to run instances through the low-level client, you should use:
ec2_client = boto3.client('ec2')
ec2_client.run_instances(
    ImageId='<ami-image-id>',
    InstanceType='t1.micro',
    MinCount=1,
    MaxCount=1,
    UserData='string',
)
You can see all the arguments that create_instances and run_instances support at:
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Subnet.create_instances
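Putting this together for the S3-triggered use case, a minimal handler sketch; the AMI ID, instance type, and user data script are hypothetical placeholders, and the bucket/key of the triggering object are passed into the script so the instance can fetch the file itself:

import boto3

ec2 = boto3.client('ec2')

# Hypothetical user data script mirroring the CLI command from the question.
USER_DATA_TEMPLATE = """#!/bin/bash
aws --region us-east-1 s3 cp s3://{bucket}/{key} /file/
"""

def lambda_handler(event, context):
    # S3 put events carry the bucket name and object key in the first record.
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']

    ec2.run_instances(
        ImageId='ami-xxxxxxxx',   # placeholder AMI ID
        InstanceType='t2.micro',
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA_TEMPLATE.format(bucket=bucket, key=key),
    )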
I would like to call the aws s3 sync command from within an AWS Lambda function with a runtime version of Python 3.6. How can I do this?
Why don't you just use the included boto3 SDK?
boto3 does not have an equivalent to the sync command
boto3 does not automatically find MIME types ("If you do not provide anything for ContentType to ExtraArgs, the end content type will always be binary/octet-stream.")
aws cli does automatically find MIME types ("By default the mime type of a file is guessed when it is uploaded")
Architecturally this doesn't make sense!
For my use case I think it makes sense architecturally and financially, but I'm open to alternatives. My Lambda function:
downloads Git and Hugo
downloads my repository
runs Hugo to generate my small (<100 pages) website
uploads the generated files to S3
Right now, I'm able to do all of the above on a 1536 MB (the most powerful) Lambda function in around 1-2 seconds. This function is only triggered when I commit changes to my website, so it's inexpensive to run.
Maybe it is already installed in the Lambda environment?
As of the time of this writing, it is not.
From Running aws-cli Commands Inside An AWS Lambda Function:
import subprocess

# source_dir and to_bucket are defined elsewhere in the function
command = ["./aws", "s3", "sync", "--acl", "public-read", "--delete",
           source_dir + "/", "s3://" + to_bucket + "/"]
print(subprocess.check_output(command, stderr=subprocess.STDOUT))
The AWS CLI isn't installed by default on Lambda, so you have to include it in your deployment package. Despite running in a Python 3.6 Lambda runtime, Python 2.7 is still available in the environment, so the approach outlined in the article will continue to work.
To experiment on Lambda systems, take a look at lambdash.
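Alternatively, if you'd rather avoid bundling the CLI, here is a rough boto3 sketch of the upload half of sync that guesses MIME types with the standard library; the function and variable names are hypothetical, and unlike the real sync it uploads everything rather than diffing or deleting:

import mimetypes
import os

import boto3

s3 = boto3.client("s3")

def upload_dir(source_dir, bucket):
    # Walk the generated site and upload each file with a guessed Content-Type,
    # since boto3 otherwise defaults to binary/octet-stream.
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, source_dir).replace(os.sep, "/")
            content_type, _ = mimetypes.guess_type(path)
            s3.upload_file(
                path, bucket, key,
                ExtraArgs={
                    "ACL": "public-read",
                    "ContentType": content_type or "binary/octet-stream",
                },
            )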