I'd like to create and deploy an ECS cluster and service using Terraform's ecs_service resource, but I am unable to do so. My terraform apply runs always fail around IAM roles, which I don't clearly understand. Specifically, the error message is:
InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
And I have found that:
When I have iam_role specified in ecs_service, ECS complains that I need to use a service-linked role.
When I have iam_role commented out in ecs_service, ECS complains that the assumed role cannot validate the targetGroupArn.
My Terraform configuration spans several files; I have pulled what look like the relevant portions out below. Although I have seen a few similar problems posted, none of them provided an actionable solution to the dilemma above.
## ALB
resource "aws_alb" "frankly_internal_alb" {
name = "frankly-internal-alb"
internal = false
security_groups = ["${aws_security_group.frankly_internal_alb_sg.id}"]
subnets = ["${aws_subnet.frankly_public_subnet_a.id}", "${aws_subnet.frankly_public_subnet_b.id}"]
}
resource "aws_alb_listener" "frankly_alb_listener" {
load_balancer_arn = "${aws_alb.frankly_internal_alb.arn}"
port = "8080"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
type = "forward"
}
}
## Target Group
resource "aws_alb_target_group" "frankly_internal_target_group" {
name = "internal-target-group"
port = 8080
protocol = "HTTP"
vpc_id = "${aws_vpc.frankly_vpc.id}"
health_check {
healthy_threshold = 5
unhealthy_threshold = 2
timeout = 5
}
}
## IAM
resource "aws_iam_role" "frankly_ec2_role" {
name = "franklyec2role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role" "frankly_ecs_role" {
name = "frankly_ecs_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
# aggressively add permissions...
resource "aws_iam_policy" "frankly_ecs_policy" {
name = "frankly_ecs_policy"
description = "A test policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:*",
"ecs:*",
"ecr:*",
"autoscaling:*",
"elasticloadbalancing:*",
"application-autoscaling:*",
"logs:*",
"tag:*",
"resource-groups:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "frankly_ecs_attach" {
role = "${aws_iam_role.frankly_ecs_role.name}"
policy_arn = "${aws_iam_policy.frankly_ecs_policy.arn}"
}
## ECS
resource "aws_ecs_cluster" "frankly_ec2" {
name = "frankly_ec2_cluster"
}
resource "aws_ecs_task_definition" "frankly_ecs_task" {
family = "service"
container_definitions = "${file("terraform/task-definitions/search.json")}"
volume {
name = "service-storage"
docker_volume_configuration {
scope = "shared"
autoprovision = true
}
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
resource "aws_ecs_service" "frankly_ecs_service" {
name = "frankly_ecs_service"
cluster = "${aws_ecs_cluster.frankly_ec2.id}"
task_definition = "${aws_ecs_task_definition.frankly_ecs_task.arn}"
desired_count = 2
iam_role = "${aws_iam_role.frankly_ecs_role.arn}"
depends_on = ["aws_iam_role.frankly_ecs_role", "aws_alb.frankly_internal_alb", "aws_alb_target_group.frankly_internal_target_group"]
# network_configuration = {
# subnets = ["${aws_subnet.frankly_private_subnet_a.id}", "${aws_subnet.frankly_private_subnet_b}"]
# security_groups = ["${aws_security_group.frankly_internal_alb_sg}", "${aws_security_group.frankly_service_sg}"]
# # assign_public_ip = true
# }
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
load_balancer {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
container_name = "search-svc"
container_port = 8080
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
I was seeing an identical error message, and I was doing something else wrong:
I had specified the load balancer's ARN rather than the target group's ARN in the service's load_balancer block.
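For reference, a sketch using the resource names from the question above; the value must come from the aws_alb_target_group resource, not from the aws_alb resource:
load_balancer {
  # wrong: "${aws_alb.frankly_internal_alb.arn}" is the load balancer's ARN
  # right: the target group's ARN
  target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
  container_name   = "search-svc"
  container_port   = 8080
}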
For me, the problem was that I forgot to attach the right policy to the service role. Attaching this AWS-managed policy helped: arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
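If the service role is managed in Terraform, a minimal sketch of that attachment for the frankly_ecs_role from the question (the resource label here is mine):
resource "aws_iam_role_policy_attachment" "frankly_ecs_service_role_attach" {
  role       = "${aws_iam_role.frankly_ecs_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}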
In my case, I was using the output of a previous command, but that output was empty, so the target group ARN was empty in the create-service call.
I had the wrong role attached.
resource "aws_ecs_service" "ECSService" {
name = "stage-quotation"
cluster = aws_ecs_cluster.ECSCluster2.id
load_balancer {
target_group_arn = aws_lb_target_group.ElasticLoadBalancingV2TargetGroup2.arn
container_name = "stage-quotation"
container_port = 8000
}
desired_count = 1
task_definition = aws_ecs_task_definition.ECSTaskDefinition.arn
deployment_maximum_percent = 200
deployment_minimum_healthy_percent = 100
iam_role = aws_iam_service_linked_role.IAMServiceLinkedRole4.arn # service-linked role for ECS (see note below)
ordered_placement_strategy {
type = "spread"
field = "instanceId"
}
health_check_grace_period_seconds = 0
scheduling_strategy = "REPLICA"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole2" {
aws_service_name = "ecs.application-autoscaling.amazonaws.com"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole4" {
aws_service_name = "ecs.amazonaws.com"
description = "Role to enable Amazon ECS to manage your cluster."
}
I had accidentally used my application-autoscaling role because of a poor naming convention. The correct role to use is the one defined above as IAMServiceLinkedRole4.
In order to prevent the error:
Error: creating ECS Service (*****): InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
On my side, it is working with the following configuration:
Role Trusted relationship: Adding statement to Trusted Policy
{
"Sid": "ECSpermission",
"Effect": "Allow",
"Principal": {
"Service": [
"ecs.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
Role Permissions:
Adding the AWS managed policies (a Terraform attachment sketch follows the inline policy below):
AmazonEC2ContainerRegistryFullAccess
AmazonEC2ContainerServiceforEC2Role
Adding a custom inline policy (I know these permissions are very broad):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"autoscaling:*",
"elasticloadbalancing:*",
"application-autoscaling:*",
"resource-groups:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
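For completeness, a sketch of attaching those two managed policies with Terraform (the resource labels are mine, and the role name is taken from the iam_role ARN used below):
resource "aws_iam_role_policy_attachment" "ecr_full_access" {
  role       = "my_custom_role"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess"
}
resource "aws_iam_role_policy_attachment" "ecs_for_ec2" {
  role       = "my_custom_role"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}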
Declare your custom role with the iam_role argument in the aws_ecs_service resource:
resource "aws_ecs_service" "team_deployment" {
name = local.ecs_task
cluster = data.terraform_remote_state.common_resources.outputs.ecs_cluster.id
task_definition = aws_ecs_task_definition.team_deployment.arn
launch_type = "EC2"
iam_role = "arn:aws:iam::****:role/my_custom_role"
desired_count = 3
enable_ecs_managed_tags = true
force_new_deployment = true
scheduling_strategy = "REPLICA"
wait_for_steady_state = false
load_balancer {
target_group_arn = data.terraform_remote_state.common_resources.outputs.target_group_api.arn
container_name = var.ecr_image_tag
container_port = var.ecr_image_port
}
}
Of course, be careful with the target_group_arn value: it must be the target group ARN, not the load balancer ARN. With that in place, it now works fine:
Releasing state lock. This may take a few moments...
Apply complete! Resources: 1 added, 2 changed, 0 destroyed.
I resolved this by destroying my stack and re-deploying.
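If destroying the whole stack is too blunt, a narrower variant of the same idea (using the service address from the question above) would be something like:
terraform destroy -target=aws_ecs_service.frankly_ecs_service
terraform apply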
Related
Scenario
I'm having a problem where a Terraform module defines both the SQS queue and its policy, but I get the following error when running terraform plan, apply, and even refresh. Why?
Error
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument
User code
module "my_sqsqueue" {
source = "[redacted]"
sqs_name = "${local.some_name}"
sqs_policy = <<EOF
{
"Version": "2012-10-17",
"Id": "my_policy",
"Statement": [
{
"Sid": "111",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "sqs:SendMessage",
"Resource": "${module.my_sqsqueue.sqs_queue_arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "[redacted]"
}
}
}
]
}
EOF
}
Module definition
resource "aws_sqs_queue_policy" "main_queue_policy" {
count = var.sqs_policy != "" ? 1 : 0
queue_url = aws_sqs_queue.main_queue.id
policy = var.sqs_policy
}
resource "aws_sqs_queue" "main_queue" {
content_based_deduplication = var.sqs_content_based_deduplication
delay_seconds = var.sqs_delay_seconds
fifo_queue = var.sqs_fifo_queue
kms_data_key_reuse_period_seconds = var.sqs_kms_data_key_reuse_period_seconds
kms_master_key_id = var.sqs_kms_master_key_id
max_message_size = var.sqs_max_message_size
message_retention_seconds = var.sqs_message_retention_seconds
name = var.sqs_name
receive_wait_time_seconds = var.sqs_receive_wait_time_seconds
visibility_timeout_seconds = var.sqs_visibility_timeout_seconds
tags = merge(
{
Name = var.sqs_name
},
local.default_tag_map
)
}
The Resource attribute in sqs_policy references an output of the my_sqsqueue module, but that module itself depends on sqs_policy, so Terraform cannot resolve the count on the policy resource at plan time.
So, either:
Temporarily remove the circular reference by setting the sqs_policy attribute to "", apply, then restore the reference and apply again; or
Manually construct the reference if possible, as sketched below. Here, with AWS ARNs, that is possible, but it isn't always the case.
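A sketch of that second option for this SQS case, building the queue ARN from data sources instead of the module's own output (the data source labels are mine):
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

# Then, inside the sqs_policy heredoc, replace the circular module reference:
#   "Resource": "${module.my_sqsqueue.sqs_queue_arn}"
# with a manually constructed ARN:
#   "Resource": "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${local.some_name}"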
I have seen a lot of topics opened for this kind of issue, but I have been unable to resolve it.
I'm trying to create an AWS IAM role with a policy attachment, but I always get this error:
Error: Error creating IAM Role test-role: MalformedPolicyDocument: JSON strings must not have leading spaces
My configuration follows the documentation:
Role : https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
Policy attachment: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment
Please find my configuration
resource "aws_iam_instance_profile" "test-role-profile" {
name = "test-role-profile"
role = aws_iam_role.test-role.name
}
resource "aws_iam_role" "test-role" {
name = "test-role"
assume_role_policy = <<EOF
  {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecr.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_policy" "test-role-policy" {
name = "test-role-policy"
description = "Test role policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:CreateRepository",
"ecr:DescribeImages",
"ecr:DescribeRegistry",
"ecr:DescribeRepositories",
"ecr:GetAuthorizationToken",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRegistryPolicy",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:ListTagsForResource",
"ecr:PutLifecyclePolicy",
"ecr:PutRegistryPolicy",
"ecr:SetRepositoryPolicy",
"ecr:StartLifecyclePolicyPreview",
"ecr:PutImage"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "test-role-attach" {
role = aws_iam_role.test-role.name
policy_arn = aws_iam_policy.test-role-policy.arn
}
Version: Terraform v0.12.31
Does anyone have an idea?
Thanks
You have some whitespace before the first { character in the JSON string here (the heredoc preserves those leading spaces, and IAM rejects a policy document that starts with whitespace):
resource "aws_iam_role" "test-role" {
name = "test-role"
assume_role_policy = <<EOF
  {
It should look like this instead, with the opening brace starting at the beginning of the line:
resource "aws_iam_role" "test-role" {
name = "test-role"
assume_role_policy = <<EOF
{
I personally recommend either switching to the jsonencode() function for building JSON strings, which you can see examples of in your first link, or using the aws_iam_policy_document data source to construct your IAM policies.
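For example, a sketch of the same role using jsonencode(), which avoids heredoc whitespace problems entirely:
resource "aws_iam_role" "test-role" {
  name = "test-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ecr.amazonaws.com"
        }
      }
    ]
  })
}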
I'm trying to add CloudWatch logging to my API Gateway and have followed posts like this one to create the following terraform:
resource "aws_iam_role" "iam_for_api_gateway" {
name = "${var.name}-api-gateway-role"
description = "custom IAM Limited Role created with \"APIGateway\" as the trusted entity"
path = "/"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
tags = var.resourceTags
}
resource "aws_cloudwatch_log_group" "api_gateway_log_group" {
name = "/aws/lambda/${var.name}-api-gateway"
retention_in_days = 14
}
resource "aws_iam_policy" "api_gateway_logging" {
name = "${var.name}-api-gateway-logging"
path = "/"
description = "IAM policy for logging from the api gateway"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "arn:aws:logs:*:*:*",
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "gateway_logs" {
role = aws_iam_role.iam_for_api_gateway.id
policy_arn = aws_iam_policy.api_gateway_logging.arn
}
resource "aws_api_gateway_rest_api" "root_api" {
name = "${var.name}-rest-api-service"
tags = var.resourceTags
}
# at this point there are various resource "aws_api_gateway_resource" "api" blocks, etc
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = aws_iam_role.iam_for_api_gateway.arn
}
resource "aws_api_gateway_deployment" "deployment" {
rest_api_id = aws_api_gateway_rest_api.root_api.id
stage_name = var.envName
depends_on = [
aws_cloudwatch_log_group.api_gateway_log_group,
aws_api_gateway_integration.lang_integration,
aws_api_gateway_account.demo
]
lifecycle {
create_before_destroy = true
}
}
resource "aws_api_gateway_method_settings" "example" {
rest_api_id = aws_api_gateway_rest_api.root_api.id
stage_name = var.envName
method_path = "*/*"
settings {
metrics_enabled = true
logging_level = "ERROR"
}
}
But I am seeing no log entries generated for my API Gateway, though the log group is created.
I was previously getting this error:
Error: updating API Gateway Stage failed: BadRequestException: CloudWatch Logs role ARN must be set in account settings to enable logging
on ..\2-sub-modules\e-api-gateway\main.tf line 627, in resource "aws_api_gateway_method_settings" "example":
627: resource "aws_api_gateway_method_settings" "example" {
But then I updated the resource "aws_api_gateway_method_settings" "example" block (as shown above).
Now, I don't get the above error, but I also don't get any API Gateway logs.
What am I missing?
To fix the issue with "CloudWatch Logs role ARN must be set in account settings to enable logging", you should specify this role in the API Gateway account settings:
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = aws_iam_role.cloudwatch.arn
}
resource "aws_iam_role" "cloudwatch" {
name = "api_gateway_cloudwatch_global"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
Details: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_account
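Note that this role also needs permission to actually write to CloudWatch Logs; one option is the AWS managed policy for API Gateway logging (a sketch, the attachment label is mine):
resource "aws_iam_role_policy_attachment" "cloudwatch" {
  role       = aws_iam_role.cloudwatch.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}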
In addition to the information I provided in the comments, I would like to give a more precise answer to the question about why the logs are not displayed and how to display them in case someone runs into the same problem in the future.
With the logging_level property set to ERROR, only errors will be written to CloudWatch.
If we want to log all the requests going through the gateway, we have to use logging_level = "INFO". In order to log all the information related to the request (request URI, request headers, request body, and so on), we also have to enable the data_trace_enabled property:
resource "aws_api_gateway_method_settings" "example" {
rest_api_id = aws_api_gateway_rest_api.root_api.id
stage_name = var.envName
method_path = "*/*"
settings {
data_trace_enabled = true
metrics_enabled = true
logging_level = "INFO"
}
}
The Terraform data_trace_enabled property corresponds to the "Log full requests/responses data" setting in the AWS API Gateway console (metrics_enabled corresponds to "Enable Detailed CloudWatch Metrics").
Currently there is a known limitation in API Gateway: log events larger than 1024 bytes are truncated, so keep that in mind if you expect calls with many headers or large bodies. From the AWS documentation:
API Gateway currently limits log events to 1024 bytes. Log events larger than 1024 bytes, such as request and response bodies, will be truncated by API Gateway before submission to CloudWatch Logs.
I have a Terraform resource that creates a backup of an EC2 instance in AWS Backup. I am trying to choose my instances based on tags, so, following the Terraform docs online (Selecting Backups By Tag), I created a resource that looks like the one below:
resource "aws_backup_selection" "select_lin_config" {
iam_role_arn = "arn:aws:iam::abc"
name = "lin_config"
plan_id = aws_backup_plan.bkp_plan_ec2.id
selection_tag {
type = "STRINGEQUALS"
key = "Name"
value = "config_lin1"
}
}
When I run terraform apply, I get the error below:
Error: error creating Backup Selection: InvalidParameterValueException: Invalid selection conditions Condition(conditionType=STRINGEQUALS, conditionKey=Name, conditionValue=config_lin1)
{
RespMetadata: {
StatusCode: 400,
RequestID: "587a331c-e218-4341-9de1-a69a3ef7ec21"
},
Code_: "ERROR_3309",
Context: "Condition(conditionType=STRINGEQUALS, conditionKey=Name, conditionValue=config_lin1)",
Message_: "Invalid selection conditions Condition(conditionType=STRINGEQUALS, conditionKey=Name, conditionValue=config_lin1)"
}
I used the following example from the Terraform documentation almost as-is, and it worked. Copy and paste it into your Terraform code and try it out.
Just to be sure, you might want to upgrade the AWS provider to the latest version using terraform init -upgrade. My AWS provider version is 3.26.0.
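For example, a provider pin might look like this (a sketch, assuming the Terraform 0.13+ required_providers syntax):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.26.0"
    }
  }
}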
resource "aws_backup_vault" "example" {
name = "example_backup_vault"
}
resource "aws_backup_plan" "example" {
name = "tf_example_backup_plan"
rule {
rule_name = "tf_example_backup_rule"
target_vault_name = aws_backup_vault.example.name
schedule = "cron(0 12 * * ? *)"
}
advanced_backup_setting {
backup_options = {
WindowsVSS = "enabled"
}
resource_type = "EC2"
}
}
resource "aws_iam_role" "example" {
name = "example"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["sts:AssumeRole"],
"Effect": "allow",
"Principal": {
"Service": ["backup.amazonaws.com"]
}
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "example" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup"
role = aws_iam_role.example.name
}
resource "aws_backup_selection" "example" {
iam_role_arn = aws_iam_role.example.arn
name = "tf_example_backup_selection"
plan_id = aws_backup_plan.example.id
selection_tag {
type = "STRINGEQUALS"
key = "foo"
value = "bar"
}
}
I am deploying a Lambda function in each AWS Region of our account and am encountering a weird issue where the apply fails with the following error message for some of the AWS Regions.
Error while Terraform Apply
Error: Error creating Lambda function: ResourceConflictException: Function already exist: log-forwarder
{
RespMetadata: {
StatusCode: 409,
RequestID: "8cfd7260-7c4a-42d2-98c6-6619c7b2804f"
},
Message_: "Function already exist: log-forwarder",
Type: "User"
}
The Lambda function in question has just been created by the same terraform apply that is failing.
terraform init and terraform plan don't report any configuration issues; both run successfully.
Below is my directory structure
.
├── log_forwarder.tf
├── log_forwarder_lambdas
│ └── main.tf
└── providers.tf
Below is my providers.tf file
provider "aws" {
region = "us-east-1"
version = "3.9.0"
}
provider "aws" {
alias = "us-east-2"
region = "us-east-2"
version = "3.9.0"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
version = "3.9.0"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
version = "3.9.0"
}
provider "aws" {
alias = "ca-central-1"
region = "ca-central-1"
version = "3.9.0"
}
... with all the AWS Regions.
Below is the tf config of log_forwarder.tf
terraform {
required_version = "0.12.25"
backend "s3" {
# ... backend config elided ...
}
}
resource "aws_iam_role" "log_forwarder" {
name = "LogForwarder"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": ["lambda.amazonaws.com"]
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "log_forwarder" {
name = "LogForwarder"
role = aws_iam_role.log_forwarder.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:ListTags",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:*",
"arn:aws:lambda:*"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": "*"
},
{
"Sid": "AWSDatadogPermissionsForCloudtrail",
"Effect": "Allow",
"Action": ["s3:ListBucket", "s3:GetBucketLocation", "s3:GetObject", "s3:ListObjects"],
"Resource": [
"arn:aws:s3:::BucketName",
"arn:aws:s3:::BucketName/*"
]
}
]
}
EOF
}
module "DDLogForwarderUSEast1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "us-east-1"
}
module "DDLogForwarderUSEast2" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.us-east-2 }
region = "us-east-2"
}
module "DDLogForwarderUSWest1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.us-west-1 }
region = "us-west-1"
}
module "DDLogForwarderUSWest2" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "us-west-2"
providers = { aws = aws.us-west-2 }
}
module "DDLogForwarderAPEast1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.ap-east-1 }
region = "ap-east-1"
}
module "DDLogForwarderAPSouth1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "ap-south-1"
providers = { aws = aws.ap-south-1 }
}
... All AWS Regions with different providers
TF Config of log_forwarder_lambdas/main.tf
variable "region" {}
variable "account_id" {
default = "AWS Account Id"
}
variable "dd_log_forwarder_role" {}
variable "exclude_at_match" {
default = "([A-Z]* RequestId: .*)"
}
data "aws_s3_bucket" "cloudtrail_bucket" {
count = var.region == "us-west-2" ? 1 : 0
bucket = "BucketName"
}
resource "aws_lambda_function" "log_forwarder" {
filename = "${path.cwd}/log_forwarder_lambdas/aws-dd-forwarder-3.16.3.zip"
function_name = "log-forwarder"
role = var.dd_log_forwarder_role
description = "Gathers logs from targetted Cloudwatch Log Groups and sends them to DataDog"
handler = "lambda_function.lambda_handler"
runtime = "python3.7"
timeout = 600
memory_size = 1024
layers = ["arn:aws:lambda:${var.region}:464622532012:layer:Datadog-Python37:11"]
environment {
variables = {
DD_ENHANCED_METRICS = false
EXCLUDE_AT_MATCH = var.exclude_at_match
}
}
}
resource "aws_cloudwatch_log_group" "log_forwarder" {
name = "/aws/lambda/${aws_lambda_function.log_forwarder.function_name}"
retention_in_days = 90
}
resource "aws_lambda_permission" "cloudtrail_bucket" {
count = var.region == "us-west-2" ? 1 : 0
statement_id = "AllowExecutionFromS3Bucket"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.log_forwarder.arn
principal = "s3.amazonaws.com"
source_arn = element(data.aws_s3_bucket.cloudtrail_bucket.*.arn, count.index)
}
resource "aws_s3_bucket_notification" "cloudtrail_bucket_notification" {
count = var.region == "us-west-2" ? 1 : 0
bucket = element(data.aws_s3_bucket.cloudtrail_bucket.*.id, count.index)
lambda_function {
lambda_function_arn = aws_lambda_function.log_forwarder.arn
events = ["s3:ObjectCreated:*"]
}
depends_on = [aws_lambda_permission.cloudtrail_bucket, aws_cloudwatch_log_group.log_forwarder]
}
I am using TF 0.12.25 in this case.
The things I have tried so far:
Removing the .terraform folder from the root module every time I run the Terraform init/plan/apply cycle.
Refactoring the code as much as possible.
Running the plan/apply cycle locally, without any CI.
At first glance it looks as though the Lambda function may not be in your Terraform state (for whatever reason). Have you changed backends / deleted data off your backend?
Run terraform show and/or terraform state list / terraform state show and see whether the conflicting Lambda function is in your state.
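For example, something like this (the addresses are taken from your configuration):
terraform state list | grep log_forwarder
terraform state show 'module.DDLogForwarderUSEast1.aws_lambda_function.log_forwarder'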
If it is not, but it already exists in AWS, you can import it.
See here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function#import
Update:
As per your comment, since the resource exists in AWS but not in the state, this is an expected error. (Terraform doesn't know the resource exists, and therefore tries to create it; AWS knows it already exists, and therefore returns an error.)
You have two choices:
Delete the resource in AWS and run Terraform again; or
Import the existing resource into Terraform (recommended).
Try something like:
terraform import module.DDLogForwarderUSEast1.aws_lambda_function.log_forwarder log-forwarder
(Make sure you have the correct provider/region set up if trying this for other regions!)
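For example, for the us-east-2 copy, only the module address changes (Terraform resolves the aliased provider from the module's providers map):
terraform import module.DDLogForwarderUSEast2.aws_lambda_function.log_forwarder log-forwarder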