How do I identify a circular reference in Terraform? - terraform

Scenario
A Terraform module defines both an SQS queue and its policy, but I'm getting the following error when running terraform plan, apply, and even refresh. Why?
Error
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument
User code
module "my_sqsqueue" {
source = "[redacted]"
sqs_name = "${local.some_name}"
sqs_policy = <<EOF
{
"Version": "2012-10-17",
"Id": "my_policy",
"Statement": [
{
"Sid": "111",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "sqs:SendMessage",
"Resource": "${module.my_sqsqueue.sqs_queue_arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "[redacted]"
}
}
}
]
}
EOF
}
Module definition
resource "aws_sqs_queue_policy" "main_queue_policy" {
count = var.sqs_policy != "" ? 1 : 0
queue_url = aws_sqs_queue.main_queue.id
policy = var.sqs_policy
}
resource "aws_sqs_queue" "main_queue" {
content_based_deduplication = var.sqs_content_based_deduplication
delay_seconds = var.sqs_delay_seconds
fifo_queue = var.sqs_fifo_queue
kms_data_key_reuse_period_seconds = var.sqs_kms_data_key_reuse_period_seconds
kms_master_key_id = var.sqs_kms_master_key_id
max_message_size = var.sqs_max_message_size
message_retention_seconds = var.sqs_message_retention_seconds
name = var.sqs_name
receive_wait_time_seconds = var.sqs_receive_wait_time_seconds
visibility_timeout_seconds = var.sqs_visibility_timeout_seconds
tags = merge(
{
Name = var.sqs_name
},
local.default_tag_map
)
}

The Resource attribute in the sqs_policy references an output of the my_sqsqueue module, but that module itself depends on sqs_policy, which creates a circular reference.
So, either:
Temporarily break the cycle: set the sqs_policy attribute to "", apply, then restore the reference and apply again.
Construct the reference manually where possible. AWS ARNs follow a predictable format, so that works here, but it isn't always an option.
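If you control the module, a cleaner long-term fix is to build the policy inside the module, where it can reference the queue resource directly rather than going through the module's outputs. A minimal sketch, assuming a hypothetical sqs_source_arn variable in place of the full inline policy:

```hcl
# Inside the module: the policy references aws_sqs_queue.main_queue.arn
# directly, so nothing loops back through the module's outputs.
resource "aws_sqs_queue_policy" "main_queue_policy" {
  count     = var.sqs_source_arn != "" ? 1 : 0
  queue_url = aws_sqs_queue.main_queue.id

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "my_policy"
    Statement = [{
      Sid       = "111"
      Effect    = "Allow"
      Principal = { AWS = "*" }
      Action    = "sqs:SendMessage"
      Resource  = aws_sqs_queue.main_queue.arn
      Condition = {
        ArnEquals = { "aws:SourceArn" = var.sqs_source_arn }
      }
    }]
  })
}
```

Callers then pass only the source ARN (here var.sqs_source_arn, an illustrative name) instead of a full policy document, and the cycle never arises.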

Related

how to access all elements of a list variable in the policy argument of aws_iam_user_policy resource in terraform

I have an aws_iam_user_policy resource in terraform as follows:
resource "aws_iam_user_policy" "pol" {
name = "policy"
user = aws_iam_user.singleuser.name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:List*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::toybucket-development/*",
"arn:aws:s3:::toybucket-staging/*",
"arn:aws:s3:::toybucket-production/*"
]
}
]
}
EOF
}
I'm hoping to collapse the development, staging and production resources into one line by looping over a list variable with those values, but I'm unsure how to do this inside the EOF heredoc. I know how to loop over a list variable in normal Terraform, but not inside a heredoc string that represents JSON. Would anyone know of a solution?
You can do this most easily with a Terraform template, and the templatefile function. The templatefile function invocation would appear like:
resource "aws_iam_user_policy" "pol" {
name = "policy"
user = aws_iam_user.singleuser.name
policy = templatefile("${path.module}/policy.tmpl", { envs = ["development", "staging", "production"] })
}
The documentation for the function is probably helpful.
The template would appear like:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:List*"
],
"Effect": "Allow",
"Resource": [
%{~ for env in envs ~}
"arn:aws:s3:::toybucket-${env}/*"%{ if env != envs[length(envs) - 1] },%{ endif }
%{~ endfor ~}
]
}
]
}
The check at the end, which adds a comma only when the element is not the last one so the JSON stays valid, is not elegant. However, the Terraform template language has no easy way to test whether a loop iteration is the last one, and using jsonencode would require placing the entire ARN in the variable list.
If envs = ["arn:aws:s3:::toybucket-development/*", "arn:aws:s3:::toybucket-staging/*", "arn:aws:s3:::toybucket-production/*"], then you could jsonencode(envs).
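For completeness, here is a sketch of the jsonencode approach combined with a for expression, which avoids the trailing-comma problem without moving the full ARNs into the variable:

```hcl
resource "aws_iam_user_policy" "pol" {
  name = "policy"
  user = aws_iam_user.singleuser.name

  # jsonencode handles commas and quoting, so no template tricks are needed.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = ["s3:List*"]
      Effect = "Allow"
      Resource = [
        for env in ["development", "staging", "production"] :
        "arn:aws:s3:::toybucket-${env}/*"
      ]
    }]
  })
}
```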
I'm a big fan of using the data source aws_iam_policy_document for creating all kinds of IAM policy documents. In combination with a local value built with the formatlist function, you can achieve your desired result.
This could look like this:
locals {
resource_arns = formatlist("arn:aws:s3:::toybucket-%s/*", ["development", "staging", "production"])
}
data "aws_iam_policy_document" "policy" {
statement {
actions = ["s3:List*"]
effect = "Allow"
resources = local.resource_arns
}
}
resource "aws_iam_user_policy" "pol" {
name = "policy"
user = aws_iam_user.singleuser.name
policy = data.aws_iam_policy_document.policy.json
}

Terraform - The instance profile associated with the environment does not exist

I am trying to create an Elastic Beanstalk application but I am coming across the error:
The instance profile iam_for_beanstalk associated with the environment does not exist.
The role does exist, as I can see in the IAM console, and it is being created via Terraform through the following code:
resource "aws_iam_role" "beanstalk" {
name = "iam_for_beanstalk"
assume_role_policy = file("${path.module}/assumerole.json")
}
assumerole.json looks like this:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "elasticbeanstalk.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "elasticbeanstalk"
}
}
}]
}
And here is how I try to associate it to the newly created application:
resource "aws_elastic_beanstalk_environment" "nodejs" {
application = aws_elastic_beanstalk_application.nodejs.name
name = "stackoverflow"
version_label = var.app_version
solution_stack_name = "64bit Amazon Linux 2 v5.4.6 running Node.js 14"
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "iam_for_beanstalk"
}...
I also tried to assign the name like below, but no success:
value = aws_iam_role.beanstalk.name
You also need to create an aws_iam_instance_profile resource:
resource "aws_iam_instance_profile" "beanstalk_instance_profile" {
name = "stack-overflow-example"
role = aws_iam_role.beanstalk.name
}
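With that in place, the Beanstalk setting should reference the instance profile's name rather than the role's name (a sketch based on the resources above):

```hcl
# Inside the aws_elastic_beanstalk_environment.nodejs resource:
setting {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "IamInstanceProfile"
  value     = aws_iam_instance_profile.beanstalk_instance_profile.name
}
```

An instance profile is a separate IAM object that wraps a role for use by EC2 instances; creating the role alone is not enough.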

How to resolve json strings must not have leading spaces in terraform for AWS ecr IAM role

I've seen many topics opened for this kind of issue, but have been unable to resolve it.
I'm trying to create an AWS IAM role with an attached policy, but I keep getting this error:
Error: Error creating IAM Role test-role: MalformedPolicyDocument: JSON strings must not have leading spaces
I am fully aligned with the documentation :
Role : https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
Policy attachment: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment
Please find my configuration
resource "aws_iam_instance_profile" "test-role-profile" {
name = "test-role-profile"
role = aws_iam_role.test-role.name
}
resource "aws_iam_role" "test-role" {
name = "test-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecr.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_policy" "test-role-policy" {
name = "test-role-policy"
description = "Test role policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:CreateRepository",
"ecr:DescribeImages",
"ecr:DescribeRegistry",
"ecr:DescribeRepositories",
"ecr:GetAuthorizationToken",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRegistryPolicy",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:ListTagsForResource",
"ecr:PutLifecyclePolicy",
"ecr:PutRegistryPolicy",
"ecr:SetRepositoryPolicy",
"ecr:StartLifecyclePolicyPreview",
"ecr:PutImage"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "test-role-attach" {
role = aws_iam_role.test-role.name
policy_arn = aws_iam_policy.test-role-policy.arn
}
Version: Terraform v0.12.31
Does anyone have an idea?
Thanks
You have some space before the first { character in the JSON string here:
resource "aws_iam_role" "test-role" {
  name = "test-role"
  assume_role_policy = <<EOF
  {
It should look like this instead:
resource "aws_iam_role" "test-role" {
  name = "test-role"
  assume_role_policy = <<EOF
{
I personally recommend either switching to the jsonencode() method of building JSON strings, which you can see examples of in your first link, or using aws_iam_policy_document to construct your IAM policies.
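For instance, the same trust policy rewritten with jsonencode sidesteps heredoc whitespace entirely (a sketch keeping the original's principal as-is):

```hcl
resource "aws_iam_role" "test-role" {
  name = "test-role"

  # jsonencode always emits valid JSON with no leading whitespace.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Principal = { Service = "ecr.amazonaws.com" }
      Effect    = "Allow"
      Sid       = ""
    }]
  })
}
```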

How can I use a bucket as a variable within my Terraform bucket policy?

I want to create a list of S3 buckets and limit access to them to one user. That user should only have access to that bucket and no permissions to do other things in AWS.
I created my list as so (bucket names are not real in this example):
// List bucket names as a variable
variable "s3_bucket_name" {
type = "list"
default = [
"myfirstbucket",
"mysecondbucket",
...
]
}
Then I create a user.
// Create a user
resource "aws_iam_user" "aws_aim_users" {
count = "${length(var.s3_bucket_name)}"
name = "${var.s3_bucket_name[count.index]}"
path = "/"
}
I then create an access key.
// Create an access key
resource "aws_iam_access_key" "aws_iam_access_keys" {
count = "${length(var.s3_bucket_name)}"
user = "${var.s3_bucket_name[count.index]}"
// user = "${aws_iam_user.aws_aim_user.name}"
}
Now I create a user policy
// Add user policy
resource "aws_iam_user_policy" "aws_iam_user_policies" {
// user = "${aws_iam_user.aws_aim_user.name}"
count = "${length(var.s3_bucket_name)}"
name = "${var.s3_bucket_name[count.index]}"
user = "${var.s3_bucket_name[count.index]}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetLifecycleConfiguration",
...
],
"Resource": "${var.s3_bucket_name[count.index].arn}}"
}
]
}
EOF
}
Now I create my buckets with the user attached.
resource "aws_s3_bucket" "aws_s3_buckets" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
policy = <<POLICY
{
"Id": "Policy1574607242703",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1574607238413",
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": {
"${var.s3_bucket_name[count.index].arn}}"
"${var.s3_bucket_name[count.index].arn}/*}"
},
"Principal": {
"AWS": "${var.s3_bucket_name[count.index]}"
}
}
]
}
POLICY
tags = {
Name = "${var.s3_bucket_name[count.index]}"
Environment = "live"
}
}
The problem I have is that Terraform doesn't like how I set the ARN in the policy using my variable.
I also believe I need to use the user's ARN, not the bucket's, although they have the same name. What am I doing wrong here?
I think I see a few things that might be able to help you out.
Your variable holds plain bucket names, not resource objects, so var.s3_bucket_name[count.index].arn isn't valid; build the ARN from the name instead, so it looks like "arn:aws:s3:::my-bucket".
I also see a few extra }'s in your policy, which could also be causing problems.
Also, Terraform is now on version 0.12, which removes the need for the "${resource.thing}" wrapping and replaces it with resource.thing. There is a helpful terraform 0.12upgrade command that rewrites your files for you. Terraform 0.12 also changed how this kind of repeated resource creation is done: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each/
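A sketch of a corrected bucket resource along those lines, in 0.12 syntax (assuming the principal should be the user ARN from the aws_iam_user.aws_aim_users resources defined in the question):

```hcl
resource "aws_s3_bucket" "aws_s3_buckets" {
  count  = length(var.s3_bucket_name)
  bucket = var.s3_bucket_name[count.index]
  acl    = "private"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid    = "Stmt1574607238413"
      Action = ["s3:PutObject"]
      Effect = "Allow"
      # Build the bucket ARN from the name; the variable holds names only.
      Resource = [
        "arn:aws:s3:::${var.s3_bucket_name[count.index]}",
        "arn:aws:s3:::${var.s3_bucket_name[count.index]}/*",
      ]
      Principal = {
        AWS = aws_iam_user.aws_aim_users[count.index].arn
      }
    }]
  })

  tags = {
    Name        = var.s3_bucket_name[count.index]
    Environment = "live"
  }
}
```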

Unable to assume role and validate the specified targetGroupArn

I'd like to create and deploy a cluster using terraform ecs_service, but am unable to do so. My terraform apply runs always fail around IAM roles, in ways I don't fully understand. Specifically, the error message is:
InvalidParametersException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
And I have found that:
When I have iam_role specified in ecs_service, ECS complains that I need to use a service-linked role.
When I have iam_role commented in ecs_service, ECS complains that the assumed role cannot validate the targetGroupArn.
My terraform spans a bunch of files. I pulled what feels like the relevant portions out below. Though I have seen a few similar problems posted, none have provided an actionable solution that solves the dilemma above, for me.
## ALB
resource "aws_alb" "frankly_internal_alb" {
name = "frankly-internal-alb"
internal = false
security_groups = ["${aws_security_group.frankly_internal_alb_sg.id}"]
subnets = ["${aws_subnet.frankly_public_subnet_a.id}", "${aws_subnet.frankly_public_subnet_b.id}"]
}
resource "aws_alb_listener" "frankly_alb_listener" {
load_balancer_arn = "${aws_alb.frankly_internal_alb.arn}"
port = "8080"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
type = "forward"
}
}
## Target Group
resource "aws_alb_target_group" "frankly_internal_target_group" {
name = "internal-target-group"
port = 8080
protocol = "HTTP"
vpc_id = "${aws_vpc.frankly_vpc.id}"
health_check {
healthy_threshold = 5
unhealthy_threshold = 2
timeout = 5
}
}
## IAM
resource "aws_iam_role" "frankly_ec2_role" {
name = "franklyec2role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role" "frankly_ecs_role" {
name = "frankly_ecs_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
# aggresively add permissions...
resource "aws_iam_policy" "frankly_ecs_policy" {
name = "frankly_ecs_policy"
description = "A test policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:*",
"ecs:*",
"ecr:*",
"autoscaling:*",
"elasticloadbalancing:*",
"application-autoscaling:*",
"logs:*",
"tag:*",
"resource-groups:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "frankly_ecs_attach" {
role = "${aws_iam_role.frankly_ecs_role.name}"
policy_arn = "${aws_iam_policy.frankly_ecs_policy.arn}"
}
## ECS
resource "aws_ecs_cluster" "frankly_ec2" {
name = "frankly_ec2_cluster"
}
resource "aws_ecs_task_definition" "frankly_ecs_task" {
family = "service"
container_definitions = "${file("terraform/task-definitions/search.json")}"
volume {
name = "service-storage"
docker_volume_configuration {
scope = "shared"
autoprovision = true
}
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
resource "aws_ecs_service" "frankly_ecs_service" {
name = "frankly_ecs_service"
cluster = "${aws_ecs_cluster.frankly_ec2.id}"
task_definition = "${aws_ecs_task_definition.frankly_ecs_task.arn}"
desired_count = 2
iam_role = "${aws_iam_role.frankly_ecs_role.arn}"
depends_on = ["aws_iam_role.frankly_ecs_role", "aws_alb.frankly_internal_alb", "aws_alb_target_group.frankly_internal_target_group"]
# network_configuration = {
# subnets = ["${aws_subnet.frankly_private_subnet_a.id}", "${aws_subnet.frankly_private_subnet_b}"]
# security_groups = ["${aws_security_group.frankly_internal_alb_sg}", "${aws_security_group.frankly_service_sg}"]
# # assign_public_ip = true
# }
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
load_balancer {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
container_name = "search-svc"
container_port = 8080
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
I was seeing an identical error message and I was doing something else wrong:
I had specified the load balancer's ARN instead of the load balancer's target group ARN.
For me, the problem was that I forgot to attach the right policy to the service role. Attaching this AWS-managed policy helped: arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
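In Terraform, that attachment can be expressed as follows (a sketch; the role name is borrowed from the question's frankly_ecs_role):

```hcl
# Attach the AWS-managed ECS service role policy to the service role.
resource "aws_iam_role_policy_attachment" "ecs_service_role" {
  role       = aws_iam_role.frankly_ecs_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}
```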
For me, I was using the output of a previous command, but that output was empty, so the target group ARN was empty in the create-service call.
I had the wrong role attached.
resource "aws_ecs_service" "ECSService" {
name = "stage-quotation"
cluster = aws_ecs_cluster.ECSCluster2.id
load_balancer {
target_group_arn = aws_lb_target_group.ElasticLoadBalancingV2TargetGroup2.arn
container_name = "stage-quotation"
container_port = 8000
}
desired_count = 1
task_definition = aws_ecs_task_definition.ECSTaskDefinition.arn
deployment_maximum_percent = 200
deployment_minimum_healthy_percent = 100
iam_role = aws_iam_service_linked_role.IAMServiceLinkedRole4.arn #
ordered_placement_strategy {
type = "spread"
field = "instanceId"
}
health_check_grace_period_seconds = 0
scheduling_strategy = "REPLICA"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole2" {
aws_service_name = "ecs.application-autoscaling.amazonaws.com"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole4" {
aws_service_name = "ecs.amazonaws.com"
description = "Role to enable Amazon ECS to manage your cluster."
}
I accidentally used my role for application-autoscaling due to poor naming convention. The correct role we need to use is defined above as IAMServiceLinkedRole4.
In order to prevent the error:
Error: creating ECS Service (*****): InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
the following configuration works for me:
Role trust relationship: add this statement to the trust policy:
{
"Sid": "ECSpermission",
"Effect": "Allow",
"Principal": {
"Service": [
"ecs.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
Role permissions:
Attach these AWS managed policies:
AmazonEC2ContainerRegistryFullAccess
AmazonEC2ContainerServiceforEC2Role
Add this custom inline policy (I know the permissions are overly broad):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"autoscaling:*",
"elasticloadbalancing:*",
"application-autoscaling:*",
"resource-groups:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
Declare your custom role with the parameter iam_role in the resource "aws_ecs_service"
resource "aws_ecs_service" "team_deployment" {
name = local.ecs_task
cluster = data.terraform_remote_state.common_resources.outputs.ecs_cluster.id
task_definition = aws_ecs_task_definition.team_deployment.arn
launch_type = "EC2"
iam_role = "arn:aws:iam::****:role/my_custom_role"
desired_count = 3
enable_ecs_managed_tags = true
force_new_deployment = true
scheduling_strategy = "REPLICA"
wait_for_steady_state = false
load_balancer {
target_group_arn = data.terraform_remote_state.common_resources.outputs.target_group_api.arn
container_name = var.ecr_image_tag
container_port = var.ecr_image_port
}
}
Of course, be careful with the target_group_arn value: it must be the target group's ARN, not the load balancer's. With that in place, it now works fine:
Releasing state lock. This may take a few moments...
Apply complete! Resources: 1 added, 2 changed, 0 destroyed.
Resolved by destroying my stack and re-deploying.
