Do I need to specify the region when instantiating an AWS Helper Class in AWS Lambda? - node.js

If I want to call AWS SES from AWS Lambda, I normally write the following when instantiating the AWS Helper Class:
var ses = new aws.SES({apiVersion: '2010-12-01', region: 'eu-west-1'});
I'm wondering, do I actually need to specify the AWS region? Or will the AWS SES helper class just run in the region where the AWS Lambda function is running?
What is the best practice here? Might I encounter problems later if I omit this?

I have always specified the region for the sake of being explicit. I went and changed one of my Node.js Lambda functions using SNS to use an empty constructor instead of providing a region and deployed it... it appears to still work. It looks like the service client runs in the region of the Lambda function it is called from (the Lambda runtime sets the AWS_REGION environment variable, which the SDK falls back to when no region is given). I imagine the IAM role for the Lambda function would play a part as well. As far as best practice, I think it is best to be explicit when possible, assuming it isn't creating a ton of overhead/hassle. The problem you risk running into in the future is using a service that isn't available in certain regions.
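For illustration, the fallback can be sketched like this (a sketch of aws-sdk v2 behavior as I understand it, not the SDK's actual internals; inside Lambda the runtime sets AWS_REGION to the function's own region):

```javascript
// Sketch of the region fallback: an explicit `region` option wins;
// otherwise the SDK reads AWS_REGION, which the Lambda runtime sets
// to the region the function is running in.
function resolveRegion(explicitRegion) {
  return explicitRegion || process.env.AWS_REGION;
}

// Explicit -- behaves the same everywhere:
// const ses = new aws.SES({ apiVersion: '2010-12-01', region: 'eu-west-1' });
// Implicit -- inherits the function's region inside Lambda:
// const ses = new aws.SES({ apiVersion: '2010-12-01' });
console.log(resolveRegion('eu-west-1'));
```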

Related

terraform cdk equivalent for grantReadWriteData (available in aws-cdk)

I am new to Terraform and also CDKTF. I have worked with “regular” AWS CDK.
In AWS CDK you have methods like grantReadWriteData (IAM principal example). E.g. if you have a DynamoDB table where you want to give a Lambda function read/write permissions, you can call something like this:
table.grantReadWriteData(postFunction);
Does anything like this exist in CDKTF, or do we have to write those policy statements ourselves and add them to a Lambda function role?
I can't find much documentation in Terraform for this.
There isn't anything like that in terms of a fluent interface for libraries generated from a provider or module, but I would definitely recommend looking into iam-floyd for a similar kind of fluent interface.
A method like table.grantReadWriteData(postFunction); is an AWS CDK L2 construct library method that generates the IAM policy and attaches it to the Lambda function's execution role.
The L2 construct library for CDKTF is not yet widespread for now,
so you need to define the permissions yourself.
And if you want to use CDKTF to deploy/manage AWS resources, you can take a look at https://www.terraform.io/cdktf/create-and-deploy/aws-adapter.
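As a hedged sketch of the manual route: roughly the policy document that grantReadWriteData would generate for you. The action list below is an illustrative subset, not the exact list the AWS CDK uses; in a CDKTF stack you would serialize this and attach it to the function's execution role via the provider's IAM resources.

```javascript
// Rough manual equivalent of table.grantReadWriteData(fn) for CDKTF.
// Assumption: the action list is an illustrative subset; adjust it to
// your table's actual access patterns.
function dynamoReadWritePolicy(tableArn) {
  return {
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Action: [
          'dynamodb:GetItem', 'dynamodb:BatchGetItem',
          'dynamodb:Query', 'dynamodb:Scan',
          'dynamodb:PutItem', 'dynamodb:UpdateItem',
          'dynamodb:DeleteItem', 'dynamodb:BatchWriteItem',
        ],
        // Cover the table and its indexes, as the CDK grant does.
        Resource: [tableArn, `${tableArn}/index/*`],
      },
    ],
  };
}
```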

Best practice to call lambda to lambda with multi tenancy?

In my serverless web app (Node.js), which supports multi-tenancy, I have the following architecture:
Controller layer - each controller is a Lambda function (separate repository)
Service layer - each service is a Lambda function (another separate repository) which also calls DynamoDB.
Currently the controller calls the service Lambda over HTTP (development purposes only) and we want to improve this using the aws-sdk with lambda.invoke() or Step Functions.
If we use lambda.invoke(), each Lambda function needs a stable ARN that the other Lambdas can use.
My question is, how can I have an ARN per tenant+lambda, and how can I maintain it?
In the other case, where we use Step Functions, I wanted to know whether it is suitable for this kind of architecture?
You don't need to maintain the Lambda ARN if you know where to look up the current one. You can use an export from a CloudFormation stack, an SSM parameter, DynamoDB, or anything really. Even for the Step Function you can redeploy using the CloudFormation output exports and it will point to the correct ARNs.
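As a sketch of the SSM-parameter approach (the parameter naming scheme here is my own assumption, not a convention from the question): deployments write each tenant's function ARN under a predictable name, and callers resolve it at invoke time.

```javascript
// Hypothetical naming scheme: /lambda-arns/<tenantId>/<service>.
// Deployments write the ARN; callers resolve it before invoking.
function tenantArnParameter(tenantId, service) {
  return `/lambda-arns/${tenantId}/${service}`;
}

// At runtime, with aws-sdk v2 (bundled in the Node.js Lambda runtime):
// const ssm = new aws.SSM();
// const { Parameter } = await ssm
//   .getParameter({ Name: tenantArnParameter(tenant, 'billing-service') })
//   .promise();
// await new aws.Lambda()
//   .invoke({ FunctionName: Parameter.Value, Payload: JSON.stringify(body) })
//   .promise();
```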

Is there AWS CDK code available to enable WAF logging to a Kinesis Firehose delivery stream?

Does anyone have Python CDK code to enable Amazon Kinesis Data Firehose delivery stream logging in WAF? CDK code in any language is fine for my reference, as I didn't find proper syntax or examples in the official Python CDK/API documentation or in any blog.
From the existing documentation (as of CDK version 1.101 and, by extension, CloudFormation) there seems to be no way of doing this out of the box.
But there is an API call which can be utilized with boto3, for example: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/wafv2.html#WAFV2.Client.put_logging_configuration
What you need to have in order to invoke the call:
ResourceArn of the web ACL
List of Kinesis Data Firehose ARN(s) which should receive the logs
This means that you can use a Custom Resource to implement this behavior. Given that you have created the Firehose and web ACL in the stack previously, use this to create the Custom Resource:
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.custom_resources/README.html
crd_provider = custom_resources.Provider(
    self,
    "Custom resource provider",
    on_event_handler=on_event_handler,
    log_retention=aws_logs.RetentionDays.ONE_DAY
)
custom_resource = core.CustomResource(
    self,
    "WAF logging configurator",
    service_token=crd_provider.service_token,
    properties={
        "ResourceArn": waf_rule.attr_arn,
        "FirehoseARN": firehose.attr_arn
    }
)
on_event_handler in this case is a lambda function which you need to implement.
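As a hedged sketch of that handler (untested, and shown in Node.js even though the stack in the question is Python; the handler runtime is independent of the stack language), the core of it is mapping the custom resource properties onto the WAFV2 PutLoggingConfiguration input shape:

```javascript
// Sketch of an on_event_handler for the custom resource (untested).
// Maps the custom resource properties to the WAFV2
// PutLoggingConfiguration input shape.
function buildLoggingConfig(props) {
  return {
    LoggingConfiguration: {
      ResourceArn: props.ResourceArn,
      LogDestinationConfigs: [props.FirehoseARN],
    },
  };
}

// exports.handler = async (event) => {
//   const wafv2 = new aws.WAFV2();
//   if (event.RequestType !== 'Delete') {
//     await wafv2
//       .putLoggingConfiguration(buildLoggingConfig(event.ResourceProperties))
//       .promise();
//   }
//   return { PhysicalResourceId: event.ResourceProperties.ResourceArn };
// };
```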
It should be possible to simplify this further by using AwsSdkCall:
on_event_handler = AwsSdkCall(
    action='PutLoggingConfiguration',
    service='waf',
    parameters={
        'ResourceArn': waf_rule.attr_arn,
        'LogDestinationConfigs': [
            firehose.attr_arn,
        ],
    },
)
This way you don't need to write your own lambda. But your use case might change and you might want to add some extra functionality to this logging configurator, so I'm showing both approaches.
Disclaimer: I haven't tested this exact code; rather, it is an excerpt of similar code I wrote to solve a similar problem of circumventing a gap in CloudFormation coverage.
I don't have a Python CDK example, but I had it working in TypeScript using CfnDeliveryStream and CfnLoggingConfiguration. I would imagine you can find the matching classes in the Python CDK.

How can I know whether it is running inside lambda?

I am deploying Node.js code to AWS Lambda and I'd like to know how I can check whether it is running in Lambda, because I need to do something different in code between Lambda and local.
AWS Lambda sets various runtime environment variables that you can leverage. You can use the following in Node.js, for example:
const isLambda = !!process.env.LAMBDA_TASK_ROOT;
console.log("Running on Lambda:", isLambda);
Note that the double bang !! converts a truthy/falsy value to a boolean (true/false).
I'd advise using a Lambda environment variable rather than attempting to detect the runtime the Lambda is executing in.
By doing this you can ensure that any infrastructure changes on the AWS side of Lambda will not affect your code.
It also allows you to test locally if you are trying to reproduce a scenario, without the need to hardcode logic.
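A minimal sketch of that approach (RUN_ENV is a hypothetical variable name of my choosing, not a Lambda convention; set it to "lambda" in the function's environment configuration and leave it unset locally):

```javascript
// Branch on a variable you control rather than a runtime detail.
// RUN_ENV is a hypothetical name set in the function's configuration.
function isLambdaEnv(env = process.env) {
  return env.RUN_ENV === 'lambda';
}

// if (isLambdaEnv()) { /* Lambda-only wiring */ }
// else { /* local development path */ }
```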

AWS Lambda spin up EC2 and triggers user data script

I need to spin up an instance using Lambda on an S3 trigger. The Lambda has to spin up an EC2 instance and run a user data script.
I have an AWS CLI command, something like aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
I'm looking for a Python boto3 implementation. Since Lambda is new to me, can someone share if it's doable?
One way is for Lambda to run a CloudFormation template with the UserData as part of it, but I think there should be an easier way to achieve this.
Just include the UserData parameter in your boto3 call.
You can use code like this:
import boto3

ec2 = boto3.resource('ec2')
ec2.create_instances(
    ImageId='<ami-image-id>',
    InstanceType='t1.micro',
    MinCount=1,  # required
    MaxCount=1,  # required
    UserData='string',  # your user data script goes here
    # ... other arguments as needed
)
If you prefer the low-level client call instead of the resource interface, you can use:
ec2.meta.client.run_instances(
    # ...
    UserData='string',
    # ...
)
You can see all the arguments that create_instances and run_instances support at:
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Subnet.create_instances
