Terraform: how to concatenate lists of AWS arns? - terraform

I want to create an iam_policy_arn_list in Terraform where the list consists of the "FullAccess" ARNs of existing AWS policies plus the ARN of a policy that I create on the fly. (I'm trying to create a Lambda function that can read/write only a specified bucket.) If I only use existing AWS policies, then the following ingredients in my setup work:
variable "iam_policy_arn_list" {
  type        = list(string)
  description = "IAM Policies to be attached to role"
  default = [
    "arn:aws:iam::aws:policy/CloudWatchFullAccess",
    "arn:aws:iam::aws:policy/AmazonSESFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess"
  ]
}
resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
  role       = "${var.prefix}${var.role_name}"
  count      = length(var.iam_policy_arn_list)
  policy_arn = var.iam_policy_arn_list[count.index]
  depends_on = [aws_iam_role.iam_for_lambda]
}
But now I want to remove "arn:aws:iam::aws:policy/AmazonS3FullAccess" and replace it with the ARN of a policy that I create on the fly, one that lets the Lambda function access only a specified S3 bucket. Where I am stuck is how to end up with a list variable of the rough form:
variable "iam_policy_arn_list" {
  type        = list(string)
  description = "IAM Policies to be attached to role"
  default = [
    "arn:aws:iam::aws:policy/CloudWatchFullAccess",
    "arn:aws:iam::aws:policy/AmazonSESFullAccess",
    arn_of_the_policy_I_create_on_the_fly
  ]
}
... because function calls like concat are not allowed in a variable's default value. I have tried using the concat function elsewhere, but nothing seems to work. E.g. I tried:
resource "aws_iam_policy" "specific_s3_bucket_policy" {
  name        = "my_name"
  description = "Grant access to one specific S3 bucket"
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Action" : [
          "s3:ListAllMyBuckets",
          "s3:ListBucket"
        ],
        "Resource" : "*"
      },
      {
        "Effect" : "Allow",
        "Action" : [
          "s3:PutObject",
          "s3:PutObjectAcl",
          "s3:GetObject",
          "s3:GetObjectAcl",
          "s3:DeleteObject"
        ],
        "Resource" : "arn:aws:s3:::${var.S3_BUCKET_NAME}/*"
      }
    ]
  })
}
resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
  role       = "${var.prefix}${var.role_name}"
  count      = length(var.iam_policy_arn_list)
  policy_arn = concat(var.iam_policy_arn_list, [aws_iam_policy.specific_s3_bucket_policy.arn])[count.index]
  depends_on = [aws_iam_role.iam_for_lambda]
}
... but this does not work. Suggestions?

Given the following iam_policy_arn_list:
variable "iam_policy_arn_list" {
  type        = list(string)
  description = "IAM Policies to be attached to role"
  default = [
    "arn:aws:iam::aws:policy/CloudWatchFullAccess",
    "arn:aws:iam::aws:policy/AmazonSESFullAccess",
  ]
}
Then create a local value like this:
locals {
  combined_iam_policy_arn_list = concat(var.iam_policy_arn_list, [aws_iam_policy.specific_s3_bucket_policy.arn])
}
And then apply it like this:
resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
  role       = "${var.prefix}${var.role_name}"
  count      = length(local.combined_iam_policy_arn_list)
  policy_arn = local.combined_iam_policy_arn_list[count.index]
  depends_on = [aws_iam_role.iam_for_lambda]
}


Separate the AWS IAM policy and reference and attach it in a different folder via Terraform

I have sample code below which creates an IAM role, a policy document, a policy from that document, and then the attachment of that policy to the role.
resource "aws_iam_role" "aws_snsANDsqsTeam" {
  name               = "aws_snsANDsqsTeam"
  assume_role_policy = data.aws_iam_policy_document.production-okta-trust-relationship.json
}
data "aws_iam_policy_document" "sns-and-sqs-policy" {
  statement {
    sid    = "AllowToPublishToSns"
    effect = "Allow"
    actions = [
      "sns:Publish",
    ]
    resources = [
      data.resource.arn,
    ]
  }
  statement {
    sid    = "AllowToSubscribeFromSqs"
    effect = "Allow"
    actions = [
      "sqs:changeMessageVisibility*",
      "sqs:SendMessage",
      "sqs:ReceiveMessage",
      "sqs:GetQueue*",
      "sqs:DeleteMessage",
    ]
    resources = [
      data.resource.arn,
    ]
  }
}
resource "aws_iam_policy" "sns-and-sqs" {
  name   = "sns-and-sqs-policy"
  policy = data.aws_iam_policy_document.sns-and-sqs-policy.json
}
resource "aws_iam_role_policy_attachment" "sns-and-sqs-role" {
  role       = "aws_snsANDsqsTeam"
  policy_arn = aws_iam_policy.sns-and-sqs.arn
}
Now I want the policy document and policy code to be moved to the developer.tf file under the shared/iam folder, so it will look like this:
data "aws_iam_policy_document" "sns-and-sqs-policy" {
  statement {
    sid    = "AllowToPublishToSns"
    effect = "Allow"
    actions = [
      "sns:Publish",
    ]
    resources = [
      data.resource.arn,
    ]
  }
  statement {
    sid    = "AllowToSubscribeFromSqs"
    effect = "Allow"
    actions = [
      "sqs:changeMessageVisibility*",
      "sqs:SendMessage",
      "sqs:ReceiveMessage",
      "sqs:GetQueue*",
      "sqs:DeleteMessage",
    ]
    resources = [
      data.resource.arn,
    ]
  }
}
resource "aws_iam_policy" "sns-and-sqs" {
  name   = "sns-and-sqs-policy"
  policy = data.aws_iam_policy_document.sns-and-sqs-policy.json
}
and have the role creation and policy attachment code in the main.tf file under the iam-platform-security folder, so the code will look like this:
resource "aws_iam_role" "aws_snsANDsqsTeam" {
  name               = "aws_snsANDsqsTeam"
  assume_role_policy = data.aws_iam_policy_document.production-okta-trust-relationship.json
}
resource "aws_iam_role_policy_attachment" "sns-and-sqs-role" {
  role       = "aws_snsANDsqsTeam"
  policy_arn = aws_iam_policy.sns-and-sqs.arn
}
My question is: how can I reference a policy defined under the shared/iam folder and attach it to a role created in the main.tf file under the iam-platform-security folder? The goal is to create policies separately in the shared/iam folder and roles under team/sub-team folders (like iam-platform-security, iam-platform-architecture, iam-platform-debug, etc.) and then create the attachments, so policies remain standalone.
Can somebody help me with this? How can I reference the policy document in a main.tf file in a different directory?
You have to use modules. A module lets you define the IAM policy once in the shared/iam folder and reference its outputs from your parent TF code in other folders.
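As a rough sketch (the module path and output name here are illustrative, not from the original question), shared/iam can export the policy ARN as an output, and the team folder can consume it through a module block:

```hcl
# shared/iam/developer.tf -- defines the policy and exports its ARN
resource "aws_iam_policy" "sns-and-sqs" {
  name   = "sns-and-sqs-policy"
  policy = data.aws_iam_policy_document.sns-and-sqs-policy.json
}

output "sns_and_sqs_policy_arn" {
  value = aws_iam_policy.sns-and-sqs.arn
}

# iam-platform-security/main.tf -- consumes the module's output
module "shared_iam" {
  source = "../shared/iam"
}

resource "aws_iam_role_policy_attachment" "sns-and-sqs-role" {
  role       = aws_iam_role.aws_snsANDsqsTeam.name
  policy_arn = module.shared_iam.sns_and_sqs_policy_arn
}
```

This keeps each policy defined exactly once; every team folder that instantiates the module attaches the same standalone policy by its exported ARN.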

Terraform access map

I am trying to access all groups and create groups in the Terraform code below, but I am getting the error This object does not have an attribute named "groups". Is there any logic I am missing in the line for_each = toset(flatten(local.instances[*].groups)) in resource "og" "example"? Thanks
locals {
  instances = {
    test1 = {
      baseUrl   = "url1"
      subDomain = "sd1"
      groups = [
        "app1",
        "app2",
      ]
    }
    test2 = {
      baseUrl   = "url2"
      subDomain = "sd2"
      groups = [
        "t1",
        "t2",
      ]
    }
  }
}
resource "og" "example" {
  for_each    = toset(flatten(local.instances[*].groups))
  name        = each.value
  description = "${each.value}-access"
}
Your local value is a map, not a list, so the [*] splat expression does not apply to it. Convert it to a list with values() first:
for_each = toset(flatten(values(local.instances)[*].groups))
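Applied to the resource above (the og resource type is the question's placeholder), the corrected block would look like:

```hcl
resource "og" "example" {
  # values() turns the map of instances into a list of objects,
  # so the [*] splat and flatten() can collect every groups entry
  for_each    = toset(flatten(values(local.instances)[*].groups))
  name        = each.value
  description = "${each.value}-access"
}
```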

Terraform AWS IAM Iterate Over Rendered JSON Policies

How can I iterate over the JSON rendered data.aws_iam_policy_document documents within an aws_iam_policy?
data "aws_iam_policy_document" "role_1" {
  statement {
    sid = "CloudFront1"
    actions = [
      "cloudfront:ListDistributions",
      "cloudfront:ListStreamingDistributions"
    ]
    resources = ["*"]
  }
}
data "aws_iam_policy_document" "role_2" {
  statement {
    sid = "CloudFront2"
    actions = [
      "cloudfront:CreateInvalidation",
      "cloudfront:GetDistribution",
      "cloudfront:GetInvalidation",
      "cloudfront:ListInvalidations"
    ]
    resources = ["*"]
  }
}
variable "role_policy_docs" {
  type        = list(string)
  description = "Policies associated with Role"
  default = [
    "data.aws_iam_policy_document.role_1.json",
    "data.aws_iam_policy_document.role_2.json",
  ]
}
locals {
  role_policy_docs = { for s in var.role_policy_docs : index(var.role_policy_docs, s) => s }
}
resource "aws_iam_policy" "role" {
  for_each    = local.role_policy_docs
  name        = format("RolePolicy-%02d", each.key)
  description = "Custom Policies for Role"
  policy      = each.value
}
resource "aws_iam_role_policy_attachment" "role" {
  for_each   = { for p in aws_iam_policy.role : p.name => p.arn }
  role       = aws_iam_role.role.name
  policy_arn = each.value
}
This example has been reduced down to the very basics. The policy documents are dynamically generated with the source_json and override_json conventions. I cannot simply combine the statements into a single policy document.
Terraform Error:
Error: "policy" contains an invalid JSON policy
on role.tf line 35, in resource "aws_iam_policy" "role":
35: policy = each.value
This:
variable "role_policy_docs" {
  type        = list(string)
  description = "Policies associated with Role"
  default = [
    "data.aws_iam_policy_document.role_1.json",
    "data.aws_iam_policy_document.role_2.json",
  ]
}
is literally defining those default values as strings, so what you're getting is this:
  + role_policy_docs = {
      + 0 = "data.aws_iam_policy_document.role_1.json"
      + 1 = "data.aws_iam_policy_document.role_2.json"
    }
If you tried removing the quotation marks around the data references, it would not be valid, because you cannot use variables or references in default definitions. Instead, assign your policy documents to a local value, and use that local in your for expression instead:
locals {
  role_policies = [
    data.aws_iam_policy_document.role_1.json,
    data.aws_iam_policy_document.role_2.json,
  ]
  role_policy_docs = {
    for s in local.role_policies :
    index(local.role_policies, s) => s
  }
}
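One caveat about the for expression above: index() looks a value up by equality, so it misbehaves if two documents ever render identical JSON. Iterating with the list index directly avoids that (this is an alternative sketch, not part of the original answer):

```hcl
locals {
  role_policies = [
    data.aws_iam_policy_document.role_1.json,
    data.aws_iam_policy_document.role_2.json,
  ]
  # key each document by its list position; a for expression over a
  # list yields (index, value) pairs, so no index() lookup is needed
  role_policy_docs = { for idx, doc in local.role_policies : idx => doc }
}
```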

aws iam role id vs role name in terraform, when to use which?

On Terraform's official site, they have an example like this (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy):
resource "aws_iam_role_policy" "test_policy" {
  name = "test_policy"
  role = aws_iam_role.test_role.id

  # Terraform's "jsonencode" function converts a
  # Terraform expression result to valid JSON syntax.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "ec2:Describe*",
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}
resource "aws_iam_role" "test_role" {
  name = "test_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })
}
where they attach a policy to a role by setting the role id in the policy, namely:
role = aws_iam_role.test_role.id
But setting it this way didn't work for me in one of our team projects; I kept getting errors (see details here: Task role defined by Terraform not working correctly for ECS scheduled task). Eventually, I realized that I had to set it using the role name, like this, in my policy:
role = aws_iam_role.my_role.name
But I do see instances in our other team projects where my coworkers are using the role id. I wonder what the differences between id and name are in the context of Terraform, and when to use which.
As already pointed out, there is no difference between id and name. You can check it by simply outputting your role:
output "test" {
  value = aws_iam_role.test_role
}
which shows that both id and name are set to test_role:
test = {
  "arn" = "arn:aws:iam::xxxxxx:role/test_role"
  "assume_role_policy" = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"ec2.amazonaws.com\"},\"Action\":\"sts:AssumeRole\"}]}"
  "create_date" = "2021-02-14T01:25:48Z"
  "description" = ""
  "force_detach_policies" = false
  "id" = "test_role"
  "max_session_duration" = 3600
  "name" = "test_role"
  "name_prefix" = tostring(null)
  "path" = "/"
  "permissions_boundary" = tostring(null)
  "tags" = tomap({})
  "unique_id" = "AROASZHPM3IXXHCEBQ6OD"
}

Unable to assume role and validate the specified targetGroupArn

I'd like to create and deploy a cluster using the Terraform ecs_service resource, but am unable to do so. My terraform apply runs always fail around IAM roles, which I don't clearly understand. Specifically, the error message is:
InvalidParametersException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
And I have found that:
When I have iam_role specified in ecs_service, ECS complains that I need to use a service-linked role.
When I have iam_role commented out in ecs_service, ECS complains that the assumed role cannot validate the targetGroupArn.
My terraform spans a bunch of files. I pulled what feels like the relevant portions out below. Though I have seen a few similar problems posted, none have provided an actionable solution that solves the dilemma above, for me.
## ALB
resource "aws_alb" "frankly_internal_alb" {
  name            = "frankly-internal-alb"
  internal        = false
  security_groups = ["${aws_security_group.frankly_internal_alb_sg.id}"]
  subnets         = ["${aws_subnet.frankly_public_subnet_a.id}", "${aws_subnet.frankly_public_subnet_b.id}"]
}
resource "aws_alb_listener" "frankly_alb_listener" {
  load_balancer_arn = "${aws_alb.frankly_internal_alb.arn}"
  port              = "8080"
  protocol          = "HTTP"
  default_action {
    target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
    type             = "forward"
  }
}
## Target Group
resource "aws_alb_target_group" "frankly_internal_target_group" {
  name     = "internal-target-group"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.frankly_vpc.id}"
  health_check {
    healthy_threshold   = 5
    unhealthy_threshold = 2
    timeout             = 5
  }
}
## IAM
resource "aws_iam_role" "frankly_ec2_role" {
  name = "franklyec2role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
resource "aws_iam_role" "frankly_ecs_role" {
  name = "frankly_ecs_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
# aggressively add permissions...
resource "aws_iam_policy" "frankly_ecs_policy" {
  name        = "frankly_ecs_policy"
  description = "A test policy"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "ecs:*",
        "ecr:*",
        "autoscaling:*",
        "elasticloadbalancing:*",
        "application-autoscaling:*",
        "logs:*",
        "tag:*",
        "resource-groups:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}
resource "aws_iam_role_policy_attachment" "frankly_ecs_attach" {
  role       = "${aws_iam_role.frankly_ecs_role.name}"
  policy_arn = "${aws_iam_policy.frankly_ecs_policy.arn}"
}
## ECS
resource "aws_ecs_cluster" "frankly_ec2" {
  name = "frankly_ec2_cluster"
}
resource "aws_ecs_task_definition" "frankly_ecs_task" {
  family                = "service"
  container_definitions = "${file("terraform/task-definitions/search.json")}"
  volume {
    name = "service-storage"
    docker_volume_configuration {
      scope         = "shared"
      autoprovision = true
    }
  }
  placement_constraints {
    type       = "memberOf"
    expression = "attribute:ecs.availability-zone in [us-east-1]"
  }
}
resource "aws_ecs_service" "frankly_ecs_service" {
  name            = "frankly_ecs_service"
  cluster         = "${aws_ecs_cluster.frankly_ec2.id}"
  task_definition = "${aws_ecs_task_definition.frankly_ecs_task.arn}"
  desired_count   = 2
  iam_role        = "${aws_iam_role.frankly_ecs_role.arn}"
  depends_on      = ["aws_iam_role.frankly_ecs_role", "aws_alb.frankly_internal_alb", "aws_alb_target_group.frankly_internal_target_group"]
  # network_configuration = {
  #   subnets         = ["${aws_subnet.frankly_private_subnet_a.id}", "${aws_subnet.frankly_private_subnet_b}"]
  #   security_groups = ["${aws_security_group.frankly_internal_alb_sg}", "${aws_security_group.frankly_service_sg}"]
  #   # assign_public_ip = true
  # }
  ordered_placement_strategy {
    type  = "binpack"
    field = "cpu"
  }
  load_balancer {
    target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
    container_name   = "search-svc"
    container_port   = 8080
  }
  placement_constraints {
    type       = "memberOf"
    expression = "attribute:ecs.availability-zone in [us-east-1]"
  }
}
I was seeing an identical error message, and I was doing something else wrong: I had specified the load balancer's ARN, not the load balancer target group's ARN.
For me, the problem was that I had forgotten to attach the right policy to the service role. Attaching this AWS-managed policy helped: arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
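In Terraform, that attachment might look like the following (the role reference here is illustrative; substitute your own service role):

```hcl
# Attach the AWS-managed ECS service role policy to the service role
resource "aws_iam_role_policy_attachment" "ecs_service_role" {
  role       = aws_iam_role.frankly_ecs_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}
```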
In my case, I was using the output of a previous command, but that output was empty, so the target group ARN was empty in the create-service call.
I had the wrong role attached.
resource "aws_ecs_service" "ECSService" {
  name    = "stage-quotation"
  cluster = aws_ecs_cluster.ECSCluster2.id
  load_balancer {
    target_group_arn = aws_lb_target_group.ElasticLoadBalancingV2TargetGroup2.arn
    container_name   = "stage-quotation"
    container_port   = 8000
  }
  desired_count                      = 1
  task_definition                    = aws_ecs_task_definition.ECSTaskDefinition.arn
  deployment_maximum_percent         = 200
  deployment_minimum_healthy_percent = 100
  iam_role                           = aws_iam_service_linked_role.IAMServiceLinkedRole4.arn
  ordered_placement_strategy {
    type  = "spread"
    field = "instanceId"
  }
  health_check_grace_period_seconds = 0
  scheduling_strategy               = "REPLICA"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole2" {
  aws_service_name = "ecs.application-autoscaling.amazonaws.com"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole4" {
  aws_service_name = "ecs.amazonaws.com"
  description      = "Role to enable Amazon ECS to manage your cluster."
}
I accidentally used my role for application-autoscaling due to a poor naming convention. The correct role we need to use is defined above as IAMServiceLinkedRole4.
In order to prevent the error:
Error: creating ECS Service (*****): InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
The following configuration is working for me:
Role trusted relationship: add this statement to the trust policy:
{
  "Sid": "ECSpermission",
  "Effect": "Allow",
  "Principal": {
    "Service": [
      "ecs.amazonaws.com",
      "ecs-tasks.amazonaws.com"
    ]
  },
  "Action": "sts:AssumeRole"
}
Role permissions:
Add the AWS managed policies:
AmazonEC2ContainerRegistryFullAccess
AmazonEC2ContainerServiceforEC2Role
Add a custom inline policy (I know these permissions are overly broad):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:*",
        "elasticloadbalancing:*",
        "application-autoscaling:*",
        "resource-groups:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Declare your custom role with the iam_role parameter in the aws_ecs_service resource:
resource "aws_ecs_service" "team_deployment" {
  name                    = local.ecs_task
  cluster                 = data.terraform_remote_state.common_resources.outputs.ecs_cluster.id
  task_definition         = aws_ecs_task_definition.team_deployment.arn
  launch_type             = "EC2"
  iam_role                = "arn:aws:iam::****:role/my_custom_role"
  desired_count           = 3
  enable_ecs_managed_tags = true
  force_new_deployment    = true
  scheduling_strategy     = "REPLICA"
  wait_for_steady_state   = false
  load_balancer {
    target_group_arn = data.terraform_remote_state.common_resources.outputs.target_group_api.arn
    container_name   = var.ecr_image_tag
    container_port   = var.ecr_image_port
  }
}
Of course, be careful with the target_group_arn value: it must be the target group's ARN, not the load balancer's. With that in place, it now works fine:
Releasing state lock. This may take a few moments...
Apply complete! Resources: 1 added, 2 changed, 0 destroyed.
Resolved by destroying my stack and re-deploying.