Terraform with ECS: Invalid arn syntax - terraform

I receive this error when running terraform apply (I am deploying a container with an ECS task that connects to RDS, all managed with Terraform):
Error: creating ECS Task Definition (project_task): ClientException: Invalid arn syntax.
│
│ with module.ecs.aws_ecs_task_definition.project_task,
│ on modules/ecs/main.tf line 37, in resource "aws_ecs_task_definition" "project_task":
│ 37: resource "aws_ecs_task_definition" "project_task" {
│
As seen in main.tf, I declared the execution role:
data "aws_ecr_repository" "project_ecr_repo" {
name = "project-ecr-repo"
}
resource "aws_ecs_cluster" "project_cluster" {
name = "project-cluster"
}
data "aws_iam_policy_document" "ecs_task_execution_role" {
version = "2012-10-17"
statement {
sid = ""
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}
# ECS task execution role
resource "aws_iam_role" "ecs_task_execution_role" {
name = "ecs_task_execution_role"
assume_role_policy = "${data.aws_iam_policy_document.ecs_task_execution_role.json}"
}
# ECS task execution role policy attachment
resource "aws_iam_role_policy_attachment" "ecs_task_execution_role" {
role = "${aws_iam_role.ecs_task_execution_role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
resource "aws_ecs_task_definition" "project_task" {
family = "project_task"
container_definitions = file("container_def.json")
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
memory = 512
cpu = 256
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
}
resource "aws_ecs_service" "project_service" {
name = "project-service"
cluster = aws_ecs_cluster.project_cluster.id
task_definition = aws_ecs_task_definition.project_task.arn
launch_type = "FARGATE"
desired_count = 2
network_configuration {
subnets = var.vpc.public_subnets
assign_public_ip = true
}
}
and here is my container definition file
[
  {
    "name": "backend_feed",
    "image": "639483503131.dkr.ecr.us-east-1.amazonaws.com/backend-feed:latest",
    "cpu": 256,
    "memory": 512,
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 8080,
        "protocol": "tcp"
      }
    ],
    "environmentFiles": [
      {
        "value": "https://myawsbucket-639483503131.s3.amazonaws.com/env_vars.json",
        "type": "s3"
      }
    ]
  }
]
Appreciate your help, thank you.
terraform apply -auto-approve
Expected: the ECS task is created with the provided container specs.

Your environmentFiles value is a web URL, but ECS expects an S3 object ARN. The documentation also says the environment file must have a .env extension.
So first rename env_vars.json to env_vars.env; the file can't be in JSON format, it has to contain one VARIABLE=VALUE pair per line.
Then specify the environmentFiles value property as an ARN:
"value": "arn:aws:s3:::myawsbucket-639483503131/env_vars.env"

Related

Having trouble with applying a bucket policy via Terraform

I had this working at one point, but I may have screwed something up or this is a bug. I thought maybe it was a race condition and tried a few depends_on, but still no luck. I can't seem to figure this out, but I do know S3 policies can be challenging with buckets and Terraform. Does anyone see anything obvious I am doing wrong?
resource "aws_s3_bucket_policy" "ct-s3-bucket-policy" {
bucket = aws_s3_bucket.mylab-s3-bucket-ct.id
policy = "${data.aws_iam_policy_document.default.json}"
}
resource "aws_cloudtrail" "mylab-cloudtrail" {
name = "mylab-cloudtrail"
s3_bucket_name = aws_s3_bucket.mylab-s3-bucket-ct.id
s3_key_prefix = "CT"
include_global_service_events = true
event_selector {
read_write_type = "All"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["arn:aws:s3:::"]
}
}
}
resource "aws_s3_bucket" "mylab-s3-bucket-ct" {
bucket = "mylab-s3-bucket-ct-1231764516123"
force_destroy = true
}
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
bucket = aws_s3_bucket.mylab-s3-bucket-ct.id
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.s3-kms.arn
sse_algorithm = "aws:kms"
}
}
}
data "aws_iam_policy_document" "default" {
statement {
sid = "AWSCloudTrailAclCheck"
effect = "Allow"
principals {
type = "Service"
identifiers = ["cloudtrail.amazonaws.com"]
}
actions = [
"s3:GetBucketAcl",
]
resources = [
"arn:aws:s3:::${var.cloudtrailbucketname}",
]
}
statement {
sid = "AWSCloudTrailWrite"
effect = "Allow"
principals {
type = "Service"
identifiers = ["cloudtrail.amazonaws.com"]
}
actions = [
"s3:PutObject",
]
resources = [
"arn:aws:s3:::${var.cloudtrailbucketname}/*",
]
condition {
test = "StringEquals"
variable = "s3:x-amz-acl"
values = [
"bucket-owner-full-control",
]
}
}
}
This is the error I see at the end. The bucket creates, but the policy won't attach.
╷
│ Error: Error putting S3 policy: MalformedPolicy: Policy has invalid resource
│ status code: 400, request id: HAK8J85M98TGTHQ4, host id: Qn2mqAJ+oKcFiCD52KfLG+10/binhRn2YUQX6MARTbW4MbV4n+P5neAXg8ikB7itINHOL07DV+I=
│
│ with aws_s3_bucket_policy.ct-s3-bucket-policy,
│ on main.tf line 126, in resource "aws_s3_bucket_policy" "ct-s3-bucket-policy":
│ 126: resource "aws_s3_bucket_policy" "ct-s3-bucket-policy" {
│
╵
╷
│ Error: Error creating CloudTrail: InsufficientS3BucketPolicyException: Incorrect S3 bucket policy is detected for bucket: mylab-s3-bucket-ct-1231764516123
│
│ with aws_cloudtrail.mylab-cloudtrail,
│ on main.tf line 131, in resource "aws_cloudtrail" "mylab-cloudtrail":
│ 131: resource "aws_cloudtrail" "mylab-cloudtrail" {
│
EDIT: For clarity, this ONLY happens with apply; planning works fine.
I believe you have to have a dependency between the bucket policy and the CloudTrail trail, like this:
resource "aws_cloudtrail" "mylab-cloudtrail" {
name = "mylab-cloudtrail"
s3_bucket_name = aws_s3_bucket.mylab-s3-bucket-ct.id
s3_key_prefix = "CT"
include_global_service_events = true
event_selector {
read_write_type = "All"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["arn:aws:s3:::"]
}
}
depends_on = [
aws_s3_bucket_policy.ct-s3-bucket-policy
]
}
If you don't have this dependency, Terraform will try to create the trail before having the necessary policy attached to the bucket.
Also, you would probably want to reference the bucket name in the policy and avoid using var.cloudtrailbucketname:
data "aws_iam_policy_document" "default" {
statement {
sid = "AWSCloudTrailAclCheck"
effect = "Allow"
principals {
type = "Service"
identifiers = ["cloudtrail.amazonaws.com"]
}
actions = [
"s3:GetBucketAcl",
]
resources = [
"arn:aws:s3:::${aws_s3_bucket.mylab-s3-bucket-ct.id}" # Get the bucket name
]
}
statement {
sid = "AWSCloudTrailWrite"
effect = "Allow"
principals {
type = "Service"
identifiers = ["cloudtrail.amazonaws.com"]
}
actions = [
"s3:PutObject",
]
resources = [
"arn:aws:s3:::${aws_s3_bucket.mylab-s3-bucket-ct.id}/*", # Get the bucket name
]
condition {
test = "StringEquals"
variable = "s3:x-amz-acl"
values = [
"bucket-owner-full-control",
]
}
}
}
Original resource call
"arn:aws:s3:::${var.cloudtrailbucketname}/*",
Changed to this and it worked. I reference it instead of building the string. For whatever reason, the JSON was malformed.
resources = ["${aws_s3_bucket.mylab-s3-bucket-ct.arn}/*"]
Thanks to Erin for helping me get in the right direction.
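For completeness, a consolidated sketch of the policy document with both statements built from the bucket resource's ARN attribute (same resource names as above); referencing the attribute also gives Terraform an implicit dependency on the bucket:

data "aws_iam_policy_document" "default" {
  statement {
    sid       = "AWSCloudTrailAclCheck"
    effect    = "Allow"
    actions   = ["s3:GetBucketAcl"]
    resources = [aws_s3_bucket.mylab-s3-bucket-ct.arn]

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
  }

  statement {
    sid       = "AWSCloudTrailWrite"
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.mylab-s3-bucket-ct.arn}/*"]

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }
}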

Unsupported argument, An argument named "" is not expected here

I am getting the below error when running terraform plan. The idea of this IAM role is to allow Lambda to run another AWS service (Step Functions) once it finishes executing.
Why does terraform fail with "An argument named "" is not expected here"?
Terraform version
Terraform v0.12.31
The error
Error: Unsupported argument
on iam.tf line 246, in resource "aws_iam_role" "lambda_role":
246: managed_policy_arns = var.managed_policy_arns
An argument named "managed_policy_arns" is not expected here.
Error: Unsupported block type
on iam.tf line 260, in resource "aws_iam_role" "lambda_role":
260: inline_policy {
Blocks of type "inline_policy" are not expected here.
The code for iam.tf:
resource "aws_iam_role" "lambda_role" {
name = "${var.name}-role"
managed_policy_arns = var.managed_policy_arns
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "lambda.amazonaws.com"
}
},
]
})
inline_policy {
name = "step_function_policy"
policy = jsonencode({
Version = "2012-10-17"
Statement: [
{
Effect: "Allow"
Action: ["states:StartExecution"]
Resource: "*"
}
]
})
}
}
For future reference, I fixed this issue by using a higher version of the AWS provider.
The provider.tf was the following:
provider "aws" {
  region  = var.region
  version = "< 3.0"
}
I changed it to this:
provider "aws" {
  region  = var.region
  version = "<= 3.37.0"
}
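As a side note, on Terraform 0.13 and later the provider version is normally pinned in a required_providers block rather than with the (since-deprecated) version argument inside the provider block; a minimal sketch:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.37.0" # 3.37.0 is the version that worked above
    }
  }
}

provider "aws" {
  region = var.region
}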

Terraform Unable to create backup of EC2 for Selecting Backups By Tag

I have a Terraform resource that creates a backup of an EC2 instance in AWS Backup. I am trying to choose my instances based on tags. So, referring to the Terraform docs online (Selecting Backups By Tag), I created a resource that looks like this:
resource "aws_backup_selection" "select_lin_config" {
iam_role_arn = "arn:aws:iam::abc"
name = "lin_config"
plan_id = aws_backup_plan.bkp_plan_ec2.id
selection_tag {
type = "STRINGEQUALS"
key = "Name"
value = "config_lin1"
}
}
When I do a terraform apply, I get the error below:
Error: error creating Backup Selection: InvalidParameterValueException: Invalid selection conditions Condition(conditionType=STRINGEQUALS, conditionKey=Name, conditionValue=config_lin1)
{
RespMetadata: {
StatusCode: 400,
RequestID: "587a331c-e218-4341-9de1-a69a3ef7ec21"
},
Code_: "ERROR_3309",
Context: "Condition(conditionType=STRINGEQUALS, conditionKey=Name, conditionValue=config_lin1)",
Message_: "Invalid selection conditions Condition(conditionType=STRINGEQUALS, conditionKey=Name, conditionValue=config_lin1)"
}
I used the following example almost as-is from the Terraform documentation and it worked. Copy and paste it into your Terraform code and try it out.
Just to be sure, you might want to upgrade the AWS provider to the latest version using terraform init -upgrade. My AWS provider version is 3.26.0.
resource "aws_backup_vault" "example" {
name = "example_backup_vault"
}
resource "aws_backup_plan" "example" {
name = "tf_example_backup_plan"
rule {
rule_name = "tf_example_backup_rule"
target_vault_name = aws_backup_vault.example.name
schedule = "cron(0 12 * * ? *)"
}
advanced_backup_setting {
backup_options = {
WindowsVSS = "enabled"
}
resource_type = "EC2"
}
}
resource "aws_iam_role" "example" {
name = "example"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["sts:AssumeRole"],
"Effect": "allow",
"Principal": {
"Service": ["backup.amazonaws.com"]
}
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "example" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup"
role = aws_iam_role.example.name
}
resource "aws_backup_selection" "example" {
iam_role_arn = aws_iam_role.example.arn
name = "tf_example_backup_selection"
plan_id = aws_backup_plan.example.id
selection_tag {
type = "STRINGEQUALS"
key = "foo"
value = "bar"
}
}
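Adapted back to the original question, a minimal sketch (assuming the asker's aws_backup_plan.bkp_plan_ec2 and an IAM role set up like the example role above; the main change is passing a real backup role ARN instead of a placeholder string):

resource "aws_backup_selection" "select_lin_config" {
  # A role that trusts backup.amazonaws.com and has
  # AWSBackupServiceRolePolicyForBackup attached.
  iam_role_arn = aws_iam_role.example.arn
  name         = "lin_config"
  plan_id      = aws_backup_plan.bkp_plan_ec2.id

  selection_tag {
    type  = "STRINGEQUALS"
    key   = "Name"
    value = "config_lin1"
  }
}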

Terraform Error creating Lambda function: ResourceConflictException with the resource just created by Terraform apply

I am deploying a Lambda function in each AWS Region of our account and encountering a weird issue where the apply fails with the following error message for some of the AWS Regions.
Error while Terraform Apply
Error: Error creating Lambda function: ResourceConflictException: Function already exist: log-forwarder
{
RespMetadata: {
StatusCode: 409,
RequestID: "8cfd7260-7c4a-42d2-98c6-6619c7b2804f"
},
Message_: "Function already exist: log-forwarder",
Type: "User"
}
The above Lambda function has just been created by the same Terraform apply that is failing.
terraform plan and terraform init don't throw any errors about TF config issues; both run successfully.
Below is my directory structure
.
├── log_forwarder.tf
├── log_forwarder_lambdas
│   └── main.tf
└── providers.tf
Below is my providers.tf file
provider "aws" {
region = "us-east-1"
version = "3.9.0"
}
provider "aws" {
alias = "us-east-2"
region = "us-east-2"
version = "3.9.0"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
version = "3.9.0"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
version = "3.9.0"
}
provider "aws" {
alias = "ca-central-1"
region = "ca-central-1"
version = "3.9.0"
}
... with all the AWS Regions.
Below is the tf config of log_forwarder.tf
terraform {
  required_version = "0.12.25"

  backend "s3" {
    # All the backend config
  }
}

resource "aws_iam_role" "log_forwarder" {
  name               = "LogForwarder"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": ["lambda.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
resource "aws_iam_role_policy" "log_forwarder" {
name = "LogForwarder"
role = aws_iam_role.log_forwarder.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:ListTags",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:*",
"arn:aws:lambda:*"
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": "*"
},
{
"Sid": "AWSDatadogPermissionsForCloudtrail",
"Effect": "Allow",
"Action": ["s3:ListBucket", "s3:GetBucketLocation", "s3:GetObject", "s3:ListObjects"],
"Resource": [
"arn:aws:s3:::BucketName",
"arn:aws:s3:::BucketName/*"
]
}
]
}
EOF
}
module "DDLogForwarderUSEast1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "us-east-1"
}
module "DDLogForwarderUSEast2" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.us-east-2 }
region = "us-east-2"
}
module "DDLogForwarderUSWest1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.us-west-1 }
region = "us-west-1"
}
module "DDLogForwarderUSWest2" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "us-west-2"
providers = { aws = aws.us-west-2 }
}
module "DDLogForwarderAPEast1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
providers = { aws = aws.ap-east-1 }
region = "ap-east-1"
}
module "DDLogForwarderAPSouth1" {
source = "./log_forwarder_lambdas"
dd_log_forwarder_role = aws_iam_role.log_forwarder.arn
region = "ap-south-1"
providers = { aws = aws.ap-south-1 }
}
... All AWS Regions with different providers
TF Config of log_forwarder_lambdas/main.tf
variable "region" {}
variable "account_id" {
default = "AWS Account Id"
}
variable "dd_log_forwarder_role" {}
variable "exclude_at_match" {
default = "([A-Z]* RequestId: .*)"
}
data "aws_s3_bucket" "cloudtrail_bucket" {
count = var.region == "us-west-2" ? 1 : 0
bucket = "BucketName"
}
resource "aws_lambda_function" "log_forwarder" {
filename = "${path.cwd}/log_forwarder_lambdas/aws-dd-forwarder-3.16.3.zip"
function_name = "log-forwarder"
role = var.dd_log_forwarder_role
description = "Gathers logs from targetted Cloudwatch Log Groups and sends them to DataDog"
handler = "lambda_function.lambda_handler"
runtime = "python3.7"
timeout = 600
memory_size = 1024
layers = ["arn:aws:lambda:${var.region}:464622532012:layer:Datadog-Python37:11"]
environment {
variables = {
DD_ENHANCED_METRICS = false
EXCLUDE_AT_MATCH = var.exclude_at_match
}
}
}
resource "aws_cloudwatch_log_group" "log_forwarder" {
name = "/aws/lambda/${aws_lambda_function.log_forwarder.function_name}"
retention_in_days = 90
}
resource "aws_lambda_permission" "cloudtrail_bucket" {
count = var.region == "us-west-2" ? 1 : 0
statement_id = "AllowExecutionFromS3Bucket"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.log_forwarder.arn
principal = "s3.amazonaws.com"
source_arn = element(data.aws_s3_bucket.cloudtrail_bucket.*.arn, count.index)
}
resource "aws_s3_bucket_notification" "cloudtrail_bucket_notification" {
count = var.region == "us-west-2" ? 1 : 0
bucket = element(data.aws_s3_bucket.cloudtrail_bucket.*.id, count.index)
lambda_function {
lambda_function_arn = aws_lambda_function.log_forwarder.arn
events = ["s3:ObjectCreated:*"]
}
depends_on = [aws_lambda_permission.cloudtrail_bucket, aws_cloudwatch_log_group.log_forwarder]
}
I am using TF 0.12.25 in this case.
The things I have tried so far:
Removing the .terraform folder from the root module every time I run the Terraform init/plan/apply cycle.
Refactoring the code as much as possible.
Running the TF plan/apply cycle locally without any CI.
At first glance it looks as though the Lambda function may not be in your Terraform state (for whatever reason). Have you changed backends / deleted data off your backend?
Run a terraform show and/or terraform state show and see if the conflicting Lambda function is in your state.
If it is not, but it already exists in AWS, you can import it.
See here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function#import
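For example, something along these lines (resource addresses assumed from the module layout above):
terraform state list | grep -i log_forwarder
terraform state show 'module.DDLogForwarderUSEast1.aws_lambda_function.log_forwarder'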
Update:
As per your comment, since the resource exists in AWS but not in the state, this is an expected error. (Terraform doesn't know the resource exists, so it tries to create it; AWS knows it already exists, so it returns an error.)
You have two choices:
Delete the resource in AWS and run Terraform again; or
Import the existing resource into Terraform (recommended).
Try something like:
terraform import module.DDLogForwarderUSEast1.aws_lambda_function.log_forwarder log-forwarder
(Make sure you have the correct provider/region set up if trying this for other regions!)

Unable to assume role and validate the specified targetGroupArn

I'd like to create and deploy a cluster using the Terraform ecs_service resource, but am unable to do so. My terraform applies always fail around IAM roles, which I don't clearly understand. Specifically, the error message is:
InvalidParametersException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
And I have found that:
When I have iam_role specified in ecs_service, ECS complains that I need to use a service-linked role.
When I have iam_role commented out in ecs_service, ECS complains that the assumed role cannot validate the targetGroupArn.
My terraform spans a bunch of files. I pulled what feels like the relevant portions out below. Though I have seen a few similar problems posted, none have provided an actionable solution that solves the dilemma above, for me.
## ALB
resource "aws_alb" "frankly_internal_alb" {
  name            = "frankly-internal-alb"
  internal        = false
  security_groups = ["${aws_security_group.frankly_internal_alb_sg.id}"]
  subnets         = ["${aws_subnet.frankly_public_subnet_a.id}", "${aws_subnet.frankly_public_subnet_b.id}"]
}

resource "aws_alb_listener" "frankly_alb_listener" {
  load_balancer_arn = "${aws_alb.frankly_internal_alb.arn}"
  port              = "8080"
  protocol          = "HTTP"

  default_action {
    target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
    type             = "forward"
  }
}

## Target Group
resource "aws_alb_target_group" "frankly_internal_target_group" {
  name     = "internal-target-group"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.frankly_vpc.id}"

  health_check {
    healthy_threshold   = 5
    unhealthy_threshold = 2
    timeout             = 5
  }
}

## IAM
resource "aws_iam_role" "frankly_ec2_role" {
  name               = "franklyec2role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role" "frankly_ecs_role" {
  name               = "frankly_ecs_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# aggresively add permissions...
resource "aws_iam_policy" "frankly_ecs_policy" {
  name        = "frankly_ecs_policy"
  description = "A test policy"
  policy      = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "ecs:*",
        "ecr:*",
        "autoscaling:*",
        "elasticloadbalancing:*",
        "application-autoscaling:*",
        "logs:*",
        "tag:*",
        "resource-groups:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "frankly_ecs_attach" {
  role       = "${aws_iam_role.frankly_ecs_role.name}"
  policy_arn = "${aws_iam_policy.frankly_ecs_policy.arn}"
}

## ECS
resource "aws_ecs_cluster" "frankly_ec2" {
  name = "frankly_ec2_cluster"
}

resource "aws_ecs_task_definition" "frankly_ecs_task" {
  family                = "service"
  container_definitions = "${file("terraform/task-definitions/search.json")}"

  volume {
    name = "service-storage"

    docker_volume_configuration {
      scope         = "shared"
      autoprovision = true
    }
  }

  placement_constraints {
    type       = "memberOf"
    expression = "attribute:ecs.availability-zone in [us-east-1]"
  }
}

resource "aws_ecs_service" "frankly_ecs_service" {
  name            = "frankly_ecs_service"
  cluster         = "${aws_ecs_cluster.frankly_ec2.id}"
  task_definition = "${aws_ecs_task_definition.frankly_ecs_task.arn}"
  desired_count   = 2
  iam_role        = "${aws_iam_role.frankly_ecs_role.arn}"
  depends_on      = ["aws_iam_role.frankly_ecs_role", "aws_alb.frankly_internal_alb", "aws_alb_target_group.frankly_internal_target_group"]

  # network_configuration = {
  #   subnets         = ["${aws_subnet.frankly_private_subnet_a.id}", "${aws_subnet.frankly_private_subnet_b}"]
  #   security_groups = ["${aws_security_group.frankly_internal_alb_sg}", "${aws_security_group.frankly_service_sg}"]
  #   # assign_public_ip = true
  # }

  ordered_placement_strategy {
    type  = "binpack"
    field = "cpu"
  }

  load_balancer {
    target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
    container_name   = "search-svc"
    container_port   = 8080
  }

  placement_constraints {
    type       = "memberOf"
    expression = "attribute:ecs.availability-zone in [us-east-1]"
  }
}
I was seeing an identical error message, and I was doing something else wrong:
I had specified the load balancer's ARN and not the load balancer's target group ARN.
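In terms of the resources in the question, that is the difference between these two references (sketch using the question's resource names):

load_balancer {
  # Wrong: the ALB's own ARN
  # target_group_arn = aws_alb.frankly_internal_alb.arn

  # Right: the target group's ARN
  target_group_arn = aws_alb_target_group.frankly_internal_target_group.arn
  container_name   = "search-svc"
  container_port   = 8080
}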
For me, the problem was that I forgot to attach the right policy to the service role. Attaching this AWS-managed policy helped: arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
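In Terraform that attachment could look like this (sketch reusing the question's frankly_ecs_role; the resource name is arbitrary):

resource "aws_iam_role_policy_attachment" "frankly_ecs_managed" {
  role       = aws_iam_role.frankly_ecs_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}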
For me, I was using the output of a previous command, but the output was empty, so the target group ARN was empty in the create-service call.
I had the wrong role attached.
resource "aws_ecs_service" "ECSService" {
name = "stage-quotation"
cluster = aws_ecs_cluster.ECSCluster2.id
load_balancer {
target_group_arn = aws_lb_target_group.ElasticLoadBalancingV2TargetGroup2.arn
container_name = "stage-quotation"
container_port = 8000
}
desired_count = 1
task_definition = aws_ecs_task_definition.ECSTaskDefinition.arn
deployment_maximum_percent = 200
deployment_minimum_healthy_percent = 100
iam_role = aws_iam_service_linked_role.IAMServiceLinkedRole4.arn #
ordered_placement_strategy {
type = "spread"
field = "instanceId"
}
health_check_grace_period_seconds = 0
scheduling_strategy = "REPLICA"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole2" {
aws_service_name = "ecs.application-autoscaling.amazonaws.com"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole4" {
aws_service_name = "ecs.amazonaws.com"
description = "Role to enable Amazon ECS to manage your cluster."
}
I accidentally used my role for application-autoscaling due to poor naming convention. The correct role we need to use is defined above as IAMServiceLinkedRole4.
In order to prevent the error:
Error: creating ECS Service (*****): InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
On my side it works with the following configuration:
Role trust relationship: add this statement to the trust policy:
{
  "Sid": "ECSpermission",
  "Effect": "Allow",
  "Principal": {
    "Service": [
      "ecs.amazonaws.com",
      "ecs-tasks.amazonaws.com"
    ]
  },
  "Action": "sts:AssumeRole"
}
Role permissions:
Add the AWS managed policies:
AmazonEC2ContainerRegistryFullAccess
AmazonEC2ContainerServiceforEC2Role
Add a custom inline policy (I know the permissions are very broad):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:*",
        "elasticloadbalancing:*",
        "application-autoscaling:*",
        "resource-groups:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Declare your custom role with the iam_role parameter in the aws_ecs_service resource:
resource "aws_ecs_service" "team_deployment" {
name = local.ecs_task
cluster = data.terraform_remote_state.common_resources.outputs.ecs_cluster.id
task_definition = aws_ecs_task_definition.team_deployment.arn
launch_type = "EC2"
iam_role = "arn:aws:iam::****:role/my_custom_role"
desired_count = 3
enable_ecs_managed_tags = true
force_new_deployment = true
scheduling_strategy = "REPLICA"
wait_for_steady_state = false
load_balancer {
target_group_arn = data.terraform_remote_state.common_resources.outputs.target_group_api.arn
container_name = var.ecr_image_tag
container_port = var.ecr_image_port
}
}
Of course, be careful with the target_group_arn value; it must be the target group's ARN. Now it is working fine:
Releasing state lock. This may take a few moments...
Apply complete! Resources: 1 added, 2 changed, 0 destroyed.
Resolved by destroying my stack and re-deploying.
