terraform cdk equivalent for grantReadWriteData (available in aws-cdk) - terraform

I am new to Terraform and also CDKTF. I have worked with “regular” AWS CDK.
In AWS CDK you have methods like grantReadWriteData (IAM principal example). E.g. if you have a DynamoDB table where you want to give a Lambda function read/write permissions, you can call something like this:
table.grantReadWriteData(postFunction);
Does anything like this exist in CDKTF, or do we have to write those policy statements ourselves and add them to a Lambda function role?
I can't find much documentation in Terraform for this.

There isn't anything like that in terms of a fluent interface for the libraries generated from a provider or module, but I would definitely recommend looking into iam-floyd for a similar type of fluent interface.

A method like table.grantReadWriteData(postFunction) is an AWS CDK L2 construct library method that generates the IAM policy and attaches it to the Lambda function's execution role for you.
The L2 construct library for CDKTF is not yet widespread.
So for now you need to define the permissions yourself and attach them to the Lambda function's execution role.
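For illustration, here is a sketch (plain Python/JSON, usable from CDKTF in any language) of the kind of policy document grantReadWriteData generates. The action list is an approximation of what the AWS CDK attaches; check it against the CDK source for your version:

```python
import json

# Approximation of the action set AWS CDK's grantReadWriteData grants;
# verify against the CDK source for your version before relying on it.
READ_ACTIONS = [
    "dynamodb:BatchGetItem",
    "dynamodb:GetRecords",
    "dynamodb:GetShardIterator",
    "dynamodb:Query",
    "dynamodb:GetItem",
    "dynamodb:Scan",
    "dynamodb:ConditionCheckItem",
]
WRITE_ACTIONS = [
    "dynamodb:BatchWriteItem",
    "dynamodb:PutItem",
    "dynamodb:UpdateItem",
    "dynamodb:DeleteItem",
]

def read_write_policy(table_arn: str) -> str:
    """Return a JSON policy document granting read/write on a table and its indexes."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": READ_ACTIONS + WRITE_ACTIONS,
            # grantReadWriteData also covers the table's secondary indexes.
            "Resource": [table_arn, f"{table_arn}/index/*"],
        }],
    })
```

In CDKTF you would then attach this document to the function's execution role via the AWS provider's IAM role policy resource.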
And if you want to use CDKTF to deploy and manage AWS resources, maybe you can take a look at the AWS adapter: https://www.terraform.io/cdktf/create-and-deploy/aws-adapter

Related

Best practice to call Lambda from Lambda with multi-tenancy?

In my Serverless web app (NodeJS) which supports multi-tenancy, I have the following architecture:
Layer of controllers - each controller is a Lambda function (separate repository)
Layer of services - each service is a Lambda function (another separate repository) which also calls DynamoDB.
Currently the controller calls the service Lambda over HTTP (development purposes only), and we want to improve this using the aws-sdk with lambda.invoke() or Step Functions.
If we use lambda.invoke(), we need a stable ARN for each Lambda function and a way to reference it from the other Lambdas.
My question is: how can I have an ARN per tenant+Lambda, and how can I maintain it?
Alternatively, if we use Step Functions, is it suitable for this kind of architecture?
You don't need to maintain the Lambda ARN if you know where to look for the current one. You can use an export from a CloudFormation stack, an SSM parameter, DynamoDB, or anything really. Even for the Step Function you can redeploy using the CloudFormation output exports and it will point to the correct ARNs.
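For example, a hedged boto3 sketch of the SSM parameter approach; the parameter naming convention (/tenants/&lt;tenant&gt;/services/&lt;service&gt;/arn) and function names are hypothetical:

```python
import json

def parameter_name(tenant_id: str, service: str) -> str:
    """Hypothetical naming convention for per-tenant function ARNs in Parameter Store."""
    return f"/tenants/{tenant_id}/services/{service}/arn"

def call_service(tenant_id: str, service: str, payload: dict) -> dict:
    """Look up the current ARN at call time, then invoke it synchronously."""
    import boto3  # deferred import: boto3 is preinstalled in the Lambda runtime
    ssm = boto3.client("ssm")
    arn = ssm.get_parameter(Name=parameter_name(tenant_id, service))["Parameter"]["Value"]
    resp = boto3.client("lambda").invoke(
        FunctionName=arn,  # invoke() accepts a full ARN as the function name
        Payload=json.dumps(payload).encode(),
    )
    return json.loads(resp["Payload"].read())
```

Each tenant's deployment pipeline writes its function ARNs to these parameters, so callers never hard-code an ARN.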

Is there AWS CDK code available to enable WAF logging to a Kinesis Firehose delivery stream?

Does anyone have Python CDK code to enable Amazon Kinesis Data Firehose delivery stream logging in WAF? CDK code in any language is fine for my reference, as I didn't find any proper syntax or examples in the official Python CDK/API documentation or in any blog.
From the existing documentation (as of CDK version 1.101 and, by extension, CloudFormation) there seems to be no way of doing this out of the box.
But there is an API call which can be utilized, with boto3 for example: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/wafv2.html#WAFV2.Client.put_logging_configuration
What you need in order to invoke the call:
ResourceArn of the web ACL
List of Kinesis Data Firehose ARN(s) which should receive the logs
This means that you can try using a Custom Resource to implement this behavior. Given you have created the Firehose and web ACL in the stack previously, use this to create the Custom Resource:
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.custom_resources/README.html
crd_provider = custom_resources.Provider(
    self,
    "Custom resource provider",
    on_event_handler=on_event_handler,
    log_retention=aws_logs.RetentionDays.ONE_DAY
)
custom_resource = core.CustomResource(
    self,
    "WAF logging configurator",
    service_token=crd_provider.service_token,
    properties={
        "ResourceArn": waf_rule.attr_arn,
        "FirehoseARN": firehose.attr_arn
    }
)
on_event_handler in this case is a lambda function which you need to implement.
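A hedged sketch of such a handler; the property names match those passed to the CustomResource above, and the wafv2 calls follow the boto3 API linked earlier:

```python
def logging_configuration(resource_arn: str, firehose_arn: str) -> dict:
    """Shape of the LoggingConfiguration argument expected by the wafv2 API."""
    return {
        "ResourceArn": resource_arn,
        "LogDestinationConfigs": [firehose_arn],
    }

def on_event(event, context):
    """Custom-resource event handler: enable logging on create/update, remove on delete."""
    import boto3  # deferred import: boto3 is preinstalled in the Lambda runtime
    props = event["ResourceProperties"]
    wafv2 = boto3.client("wafv2")
    if event["RequestType"] in ("Create", "Update"):
        wafv2.put_logging_configuration(
            LoggingConfiguration=logging_configuration(
                props["ResourceArn"], props["FirehoseARN"]
            )
        )
    elif event["RequestType"] == "Delete":
        wafv2.delete_logging_configuration(ResourceArn=props["ResourceArn"])
    # Use the web ACL ARN as the stable physical resource ID.
    return {"PhysicalResourceId": props["ResourceArn"]}
```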
It should be possible to simplify this further by using AwsSdkCall:
on_event_handler = AwsSdkCall(
    action='PutLoggingConfiguration',
    service='wafv2',
    parameters={
        'LoggingConfiguration': {
            'ResourceArn': waf_rule.attr_arn,
            'LogDestinationConfigs': [firehose.attr_arn]
        }
    }
)
This way you don't need to write your own Lambda. But your use case might change and you might want to add some extra functionality to this logging configurator, so I'm showing both approaches.
Disclaimer: I haven't tested this exact code; rather, this is an excerpt of similar code I wrote to work around a similar gap in CloudFormation coverage.
I don't have a Python CDK example, but I had it working in TypeScript using CfnDeliveryStream and CfnLoggingConfiguration. I would imagine you can find the matching classes in the Python CDK.

Using the SSM send_command in Boto3

I'm trying to create a Lambda function that will shut down systemd services running on an EC2 instance. I think using the ssm client from the boto3 module is probably the best choice, and the specific method I was considering is send_command(). Ideally I would like to use Ansible to shut down the systemd service, so I'm trying to use the "AWS-ApplyAnsiblePlaybooks" document.
It's here that I get stuck. The boto3 ssm client wants some parameters, and I've tried following the boto3 documentation, but it really isn't clear on how it wants me to present them. I found the parameters it's looking for inside the "AWS-ApplyAnsiblePlaybooks" document, but when I include them in my code, it tells me that the parameters are invalid. I also tried going to AWS' GitHub repository because I know they sometimes have code examples, but they didn't have anything for send_command().
I've uploaded a gist in case people are interested in what I've written so far. I would definitely be interested in understanding how others have gotten their Ansible playbooks to run via ssm using boto3 Python scripts.
As far as I can see from the documentation for that SSM document and the code you shared in the gist, you need to add "SourceType": ["S3"], and SourceInfo needs a path like:
{
    "path": "https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"
}
So you need to adjust your global variable S3_DEVOPS_ANSIBLE_PLAYBOOKS.
Take a look at the CLI example from the doc link; it should give you ideas on how to restructure your Parameters:
aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
--targets Key=tag:TagKey,Values=TagValue \
--parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' \
--association-name "name" --schedule-expression "cron_or_rate_expression"
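Translated to boto3's send_command, the same structure might look like the sketch below. Note that every value in Parameters must be a list of strings, and SourceInfo is itself a JSON string inside its list. The bucket path and playbook name are placeholders:

```python
import json

def ansible_parameters(playbook_url: str, playbook_file: str) -> dict:
    """Build the Parameters dict for the AWS-ApplyAnsiblePlaybooks document.

    Every value must be a list of strings; SourceInfo is a JSON string.
    """
    return {
        "SourceType": ["S3"],
        "SourceInfo": [json.dumps({"path": playbook_url})],
        "InstallDependencies": ["True"],
        "PlaybookFile": [playbook_file],
        "ExtraVariables": ["SSM=True"],
        "Check": ["False"],
        "Verbose": ["-v"],
    }

def run_playbook(instance_id: str, playbook_url: str, playbook_file: str) -> str:
    """Send the command to one instance and return the command ID for polling."""
    import boto3  # deferred import: boto3 is preinstalled in the Lambda runtime
    ssm = boto3.client("ssm")
    resp = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-ApplyAnsiblePlaybooks",
        Parameters=ansible_parameters(playbook_url, playbook_file),
    )
    return resp["Command"]["CommandId"]
```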

Do I need to specify the region when instantiating an AWS helper class in AWS Lambda?

If I want to call AWS SES from AWS Lambda, I normally write the following when instantiating the AWS helper class:
var ses = new aws.SES({apiVersion: '2010-12-01', region: 'eu-west-1'});
I'm wondering, do I actually need to specify the AWS Region? Or will the AWS SES helper class just run in the region where the AWS Lambda Function is running.
What is the best practice here? Might I encounter problems later if I omit this?
I have always specified the region for the sake of being explicit. I went and changed one of my NodeJS Lambda functions using SNS to use an empty constructor instead of providing a region, deployed it, and it appears to still work. It looks like the service client will run in the region of the Lambda function it is called from. I imagine the IAM role for the Lambda function would play a part as well.
As far as best practice, I think it is best to be explicit when possible, assuming it isn't creating a ton of overhead/hassle. The problem you risk running into in the future is the use of a resource that isn't available in certain regions.

AWS Lambda spin up EC2 and triggers user data script

I need to spin up an instance using Lambda on an S3 trigger. The Lambda has to spin up an EC2 instance and run a user data script.
I have an AWS CLI command, something like: aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
I'm looking for a Python boto3 implementation. Since Lambda is new to me, can someone share if it's doable?
One way is for Lambda to run a CloudFormation template with the UserData as part of it, but I think there should be an easier way to achieve this.
Just include the UserData parameter in your boto3 call. You should use code like this:
ec2.create_instances(
    ImageId='<ami-image-id>',
    InstanceType='t1.micro',
    UserData='string',
    ....
If you don't need to create, but just run, you should use:
ec2.client.run_instances(
    ...
    UserData='string',
    ...
You can see all the arguments that create_instances and run_instances support at:
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Subnet.create_instances
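Putting it together for the S3-copy use case above, a hedged sketch of a Lambda handler; the AMI ID, instance type, and instance profile name are placeholders you would replace:

```python
# Shell script passed as UserData; cloud-init runs it on first boot.
USER_DATA = """#!/bin/bash
aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
"""

def handler(event, context):
    """S3-triggered Lambda that launches one instance running USER_DATA at boot."""
    import boto3  # deferred import: boto3 is preinstalled in the Lambda runtime
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId="ami-xxxxxxxx",   # placeholder AMI ID
        InstanceType="t2.micro",  # placeholder instance type
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,
        # The instance needs a profile with S3 read access for the
        # aws cli call in USER_DATA to succeed (name is a placeholder):
        IamInstanceProfile={"Name": "my-s3-read-profile"},
    )
    return resp["Instances"][0]["InstanceId"]
```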
