Terraform - The instance profile associated with the environment does not exist

I am trying to create an Elastic Beanstalk application but I am coming across the error:
The instance profile iam_for_beanstalk associated with the environment does not exist.
The role does exist, and it is being created via Terraform with the following code:
resource "aws_iam_role" "beanstalk" {
name = "iam_for_beanstalk"
assume_role_policy = file("${path.module}/assumerole.json")
}
assumerole.json looks like this:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Service": "elasticbeanstalk.amazonaws.com"
    },
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringEquals": {
        "sts:ExternalId": "elasticbeanstalk"
      }
    }
  }]
}
And here is how I try to associate it to the newly created application:
resource "aws_elastic_beanstalk_environment" "nodejs" {
application = aws_elastic_beanstalk_application.nodejs.name
name = "stackoverflow"
version_label = var.app_version
solution_stack_name = "64bit Amazon Linux 2 v5.4.6 running Node.js 14"
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "iam_for_beanstalk"
}...
I also tried to assign the name as below, but without success:
value = aws_iam_role.beanstalk.name

You also need to create an aws_iam_instance_profile resource:
resource "aws_iam_instance_profile" "beanstalk_instance_profile" {
name = "stack-overflow-example"
role = aws_iam_role.beanstalk.name
}
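With the instance profile created, the IamInstanceProfile setting should reference the instance profile's name (not the role's), for example, reusing the resource above:

setting {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "IamInstanceProfile"
  value     = aws_iam_instance_profile.beanstalk_instance_profile.name
}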

Related

How do I identify a circular reference in Terraform?

Scenario
I'm having a problem where a Terraform module defines both the SQS queue and its policy, but I'm getting the following error when trying to run terraform plan, apply, and even refresh. Why?
Error
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument
User code
module "my_sqsqueue" {
source = "[redacted]"
sqs_name = "${local.some_name}"
sqs_policy = <<EOF
{
"Version": "2012-10-17",
"Id": "my_policy",
"Statement": [
{
"Sid": "111",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "sqs:SendMessage",
"Resource": "${module.my_sqsqueue.sqs_queue_arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "[redacted]"
}
}
}
]
}
EOF
}
Module definition
resource "aws_sqs_queue_policy" "main_queue_policy" {
count = var.sqs_policy != "" ? 1 : 0
queue_url = aws_sqs_queue.main_queue.id
policy = var.sqs_policy
}
resource "aws_sqs_queue" "main_queue" {
content_based_deduplication = var.sqs_content_based_deduplication
delay_seconds = var.sqs_delay_seconds
fifo_queue = var.sqs_fifo_queue
kms_data_key_reuse_period_seconds = var.sqs_kms_data_key_reuse_period_seconds
kms_master_key_id = var.sqs_kms_master_key_id
max_message_size = var.sqs_max_message_size
message_retention_seconds = var.sqs_message_retention_seconds
name = var.sqs_name
receive_wait_time_seconds = var.sqs_receive_wait_time_seconds
visibility_timeout_seconds = var.sqs_visibility_timeout_seconds
tags = merge(
{
Name = var.sqs_name
},
local.default_tag_map
)
}
The Resource attribute in the sqs_policy references an output of the my_sqsqueue module, but that module itself depends on the sqs_policy, which creates the circular reference.
So, either:
Temporarily break the circular reference by setting the sqs_policy attribute to "", apply, then restore the reference and apply again.
Manually construct the value instead of referencing the module output, where possible. Here, with AWS ARNs, that is possible (see the sketch below), but it isn't always the case.
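A minimal sketch of the second option, assuming the queue lives in us-east-1 and using a placeholder account ID, builds the queue ARN from the known name rather than from the module's output:

module "my_sqsqueue" {
  source   = "[redacted]"
  sqs_name = "${local.some_name}"

  # The ARN is assembled by hand (region and account ID are placeholders),
  # so the policy no longer depends on the module's own output.
  sqs_policy = <<EOF
{
  "Version": "2012-10-17",
  "Id": "my_policy",
  "Statement": [
    {
      "Sid": "111",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:${local.some_name}",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "[redacted]" }
      }
    }
  ]
}
EOF
}

Because the policy string no longer references module.my_sqsqueue, Terraform can evaluate the module's count expression at plan time.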

How to resolve json strings must not have leading spaces in terraform for AWS ecr IAM role

I have seen a lot of topics opened about this kind of issue, but I have not been able to resolve it.
I'm trying to create an AWS IAM role with an attached policy, but I always get this error:
Error: Error creating IAM Role test-role: MalformedPolicyDocument: JSON strings must not have leading spaces
My configuration follows the documentation:
Role: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
Policy attachment: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment
Here is my configuration:
resource "aws_iam_instance_profile" "test-role-profile" {
name = "test-role-profile"
role = aws_iam_role.test-role.name
}
resource "aws_iam_role" "test-role" {
name = "test-role"
assume_role_policy = <<EOF
    {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecr.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_policy" "test-role-policy" {
name = "test-role-policy"
description = "Test role policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:CreateRepository",
"ecr:DescribeImages",
"ecr:DescribeRegistry",
"ecr:DescribeRepositories",
"ecr:GetAuthorizationToken",
"ecr:GetLifecyclePolicy",
"ecr:GetLifecyclePolicyPreview",
"ecr:GetRegistryPolicy",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:ListTagsForResource",
"ecr:PutLifecyclePolicy",
"ecr:PutRegistryPolicy",
"ecr:SetRepositoryPolicy",
"ecr:StartLifecyclePolicyPreview",
"ecr:PutImage"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "test-role-attach" {
role = aws_iam_role.test-role.name
policy_arn = aws_iam_policy.test-role-policy.arn
}
Version: Terraform v0.12.31
Anyone have an idea? Thanks.
You have some space before the first { character in the JSON string here:
resource "aws_iam_role" "test-role" {
  name               = "test-role"
  assume_role_policy = <<EOF
    {
It should look like this instead:
resource "aws_iam_role" "test-role" {
  name               = "test-role"
  assume_role_policy = <<EOF
{
I personally recommend either switching to the jsonencode() method of building JSON strings, which you can see examples of in your first link, or using the aws_iam_policy_document data source to construct your IAM policies.
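For example, a minimal sketch of the role from the question using jsonencode() (same policy, just built from an HCL object):

resource "aws_iam_role" "test-role" {
  name = "test-role"

  # jsonencode() emits a compact JSON string with no leading whitespace,
  # so the MalformedPolicyDocument error cannot occur.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action    = "sts:AssumeRole"
        Principal = { Service = "ecr.amazonaws.com" }
        Effect    = "Allow"
        Sid       = ""
      }
    ]
  })
}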

How can I use a bucket as a variable within my Terraform bucket policy?

I want to create a list of S3 buckets and limit access to them to one user. That user should only have access to that bucket and no permissions to do other things in AWS.
I created my list as so (bucket names are not real in this example):
// List bucket names as a variable
variable "s3_bucket_name" {
type = "list"
default = [
"myfirstbucket",
"mysecondbucket",
...
]
}
Then I create a user.
// Create a user
resource "aws_iam_user" "aws_aim_users" {
count = "${length(var.s3_bucket_name)}"
name = "${var.s3_bucket_name[count.index]}"
path = "/"
}
I then create an access key.
// Create an access key
resource "aws_iam_access_key" "aws_iam_access_keys" {
count = "${length(var.s3_bucket_name)}"
user = "${var.s3_bucket_name[count.index]}"
// user = "${aws_iam_user.aws_aim_user.name}"
}
Now I create a user policy
// Add user policy
resource "aws_iam_user_policy" "aws_iam_user_policies" {
// user = "${aws_iam_user.aws_aim_user.name}"
count = "${length(var.s3_bucket_name)}"
name = "${var.s3_bucket_name[count.index]}"
user = "${var.s3_bucket_name[count.index]}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetLifecycleConfiguration",
...
],
"Resource": "${var.s3_bucket_name[count.index].arn}}"
}
]
}
EOF
}
Now I create my buckets with the user attached.
resource "aws_s3_bucket" "aws_s3_buckets" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
policy = <<POLICY
{
"Id": "Policy1574607242703",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1574607238413",
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": {
"${var.s3_bucket_name[count.index].arn}}"
"${var.s3_bucket_name[count.index].arn}/*}"
},
"Principal": {
"AWS": "${var.s3_bucket_name[count.index]}"
}
}
]
}
POLICY
tags = {
Name = "${var.s3_bucket_name[count.index]}"
Environment = "live"
}
}
The problem I have is that it doesn't like where I have set the ARN in the policy using my variable.
I also believe I need to use the user's ARN, not the bucket's, although they should have the same name. What am I doing wrong here?
I think I see a few things that might be able to help you out.
The Resource in the bucket policy isn't going to come from an .arn attribute on your variable (a plain string has none); it's built from the actual bucket name, so it would look like this: "arn:aws:s3:::my-bucket".
I also see a few extra }'s in your setup there which could also be causing problems.
Also, Terraform is now on version 0.12, which removes the need for "${resource.thing}" and replaces it with resource.thing instead. There is a helpful terraform 0.12upgrade command that upgrades your files, which is nice. Terraform 0.12 also changed how resource creation like yours is done: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each/
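Putting that together, a rough sketch of the user policy in 0.12 syntax (only the single action from the question is shown; fill in the rest of your action list):

resource "aws_iam_user_policy" "aws_iam_user_policies" {
  count = length(var.s3_bucket_name)
  name  = var.s3_bucket_name[count.index]
  user  = aws_iam_user.aws_aim_users[count.index].name

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetLifecycleConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::${var.s3_bucket_name[count.index]}",
        "arn:aws:s3:::${var.s3_bucket_name[count.index]}/*"
      ]
    }
  ]
}
EOF
}

In the bucket policy, the Principal would likewise be the user's ARN (aws_iam_user.aws_aim_users[count.index].arn) rather than the bucket name.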

Unable to assume role and validate the specified targetGroupArn

I'd like to create and deploy a cluster using the Terraform ecs_service resource, but I am unable to do so. My terraform applies always fail around IAM roles, which I don't fully understand. Specifically, the error message is:
InvalidParametersException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
And I have found that:
When I have iam_role specified in ecs_service, ECS complains that I need to use a service-linked role.
When I have iam_role commented out in ecs_service, ECS complains that the assumed role cannot validate the targetGroupArn.
My Terraform spans a bunch of files; I pulled out what feels like the relevant portions below. Though I have seen a few similar problems posted, none provided an actionable solution to the dilemma above.
## ALB
resource "aws_alb" "frankly_internal_alb" {
name = "frankly-internal-alb"
internal = false
security_groups = ["${aws_security_group.frankly_internal_alb_sg.id}"]
subnets = ["${aws_subnet.frankly_public_subnet_a.id}", "${aws_subnet.frankly_public_subnet_b.id}"]
}
resource "aws_alb_listener" "frankly_alb_listener" {
load_balancer_arn = "${aws_alb.frankly_internal_alb.arn}"
port = "8080"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
type = "forward"
}
}
## Target Group
resource "aws_alb_target_group" "frankly_internal_target_group" {
name = "internal-target-group"
port = 8080
protocol = "HTTP"
vpc_id = "${aws_vpc.frankly_vpc.id}"
health_check {
healthy_threshold = 5
unhealthy_threshold = 2
timeout = 5
}
}
## IAM
resource "aws_iam_role" "frankly_ec2_role" {
name = "franklyec2role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role" "frankly_ecs_role" {
name = "frankly_ecs_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
# aggresively add permissions...
resource "aws_iam_policy" "frankly_ecs_policy" {
name = "frankly_ecs_policy"
description = "A test policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:*",
"ecs:*",
"ecr:*",
"autoscaling:*",
"elasticloadbalancing:*",
"application-autoscaling:*",
"logs:*",
"tag:*",
"resource-groups:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "frankly_ecs_attach" {
role = "${aws_iam_role.frankly_ecs_role.name}"
policy_arn = "${aws_iam_policy.frankly_ecs_policy.arn}"
}
## ECS
resource "aws_ecs_cluster" "frankly_ec2" {
name = "frankly_ec2_cluster"
}
resource "aws_ecs_task_definition" "frankly_ecs_task" {
family = "service"
container_definitions = "${file("terraform/task-definitions/search.json")}"
volume {
name = "service-storage"
docker_volume_configuration {
scope = "shared"
autoprovision = true
}
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
resource "aws_ecs_service" "frankly_ecs_service" {
name = "frankly_ecs_service"
cluster = "${aws_ecs_cluster.frankly_ec2.id}"
task_definition = "${aws_ecs_task_definition.frankly_ecs_task.arn}"
desired_count = 2
iam_role = "${aws_iam_role.frankly_ecs_role.arn}"
depends_on = ["aws_iam_role.frankly_ecs_role", "aws_alb.frankly_internal_alb", "aws_alb_target_group.frankly_internal_target_group"]
# network_configuration = {
# subnets = ["${aws_subnet.frankly_private_subnet_a.id}", "${aws_subnet.frankly_private_subnet_b}"]
# security_groups = ["${aws_security_group.frankly_internal_alb_sg}", "${aws_security_group.frankly_service_sg}"]
# # assign_public_ip = true
# }
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
load_balancer {
target_group_arn = "${aws_alb_target_group.frankly_internal_target_group.arn}"
container_name = "search-svc"
container_port = 8080
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-east-1]"
}
}
I was seeing an identical error message, and I was doing something else wrong:
I had specified the load balancer's ARN and not the target group's ARN.
For me, the problem was that I forgot to attach the right policy to the service role. Attaching this AWS-managed policy helped: arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
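In Terraform, that attachment could look roughly like this (the resource name is illustrative, and it reuses the question's frankly_ecs_role):

resource "aws_iam_role_policy_attachment" "frankly_ecs_service_role_attach" {
  role       = aws_iam_role.frankly_ecs_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}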
For me, I was using the output of a previous command, but that output was empty, so the target group ARN was empty in the create-service call.
I had the wrong role attached.
resource "aws_ecs_service" "ECSService" {
name = "stage-quotation"
cluster = aws_ecs_cluster.ECSCluster2.id
load_balancer {
target_group_arn = aws_lb_target_group.ElasticLoadBalancingV2TargetGroup2.arn
container_name = "stage-quotation"
container_port = 8000
}
desired_count = 1
task_definition = aws_ecs_task_definition.ECSTaskDefinition.arn
deployment_maximum_percent = 200
deployment_minimum_healthy_percent = 100
iam_role = aws_iam_service_linked_role.IAMServiceLinkedRole4.arn #
ordered_placement_strategy {
type = "spread"
field = "instanceId"
}
health_check_grace_period_seconds = 0
scheduling_strategy = "REPLICA"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole2" {
aws_service_name = "ecs.application-autoscaling.amazonaws.com"
}
resource "aws_iam_service_linked_role" "IAMServiceLinkedRole4" {
aws_service_name = "ecs.amazonaws.com"
description = "Role to enable Amazon ECS to manage your cluster."
}
I accidentally used my role for application-autoscaling due to poor naming convention. The correct role we need to use is defined above as IAMServiceLinkedRole4.
In order to prevent the error:
Error: creating ECS Service (*****): InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
On my side, it works with the following configuration:
Role trust relationship: add this statement to the trust policy:
{
  "Sid": "ECSpermission",
  "Effect": "Allow",
  "Principal": {
    "Service": [
      "ecs.amazonaws.com",
      "ecs-tasks.amazonaws.com"
    ]
  },
  "Action": "sts:AssumeRole"
}
Role permissions:
Add the AWS managed policies:
AmazonEC2ContainerRegistryFullAccess
AmazonEC2ContainerServiceforEC2Role
Add a custom inline policy (I know the permissions are very broad):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:*",
        "elasticloadbalancing:*",
        "application-autoscaling:*",
        "resource-groups:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Declare your custom role with the iam_role parameter in the aws_ecs_service resource:
resource "aws_ecs_service" "team_deployment" {
name = local.ecs_task
cluster = data.terraform_remote_state.common_resources.outputs.ecs_cluster.id
task_definition = aws_ecs_task_definition.team_deployment.arn
launch_type = "EC2"
iam_role = "arn:aws:iam::****:role/my_custom_role"
desired_count = 3
enable_ecs_managed_tags = true
force_new_deployment = true
scheduling_strategy = "REPLICA"
wait_for_steady_state = false
load_balancer {
target_group_arn = data.terraform_remote_state.common_resources.outputs.target_group_api.arn
container_name = var.ecr_image_tag
container_port = var.ecr_image_port
}
}
Of course, be careful with the target_group_arn value: it must be the target group's ARN. With that, it now works fine:
Releasing state lock. This may take a few moments...
Apply complete! Resources: 1 added, 2 changed, 0 destroyed.
Resolved by destroying my stack and re-deploying.

AWS CodeBuild error on DOWNLOAD_SOURCE: CLIENT_ERROR: repository not found for primary source and source version

I'm trying to create a CodeBuild project using Terraform, but when I build I'm getting the following error on the DOWNLOAD_SOURCE step:
CLIENT_ERROR: repository not found for primary source and source version
This project uses a CodeCommit repository as the source. It's odd because all of the links to the repository from the CodeCommit console GUI work fine for this build - I can see the commits, click through to the CodeCommit repo, etc., so the source setup seems to be fine. The policy used for the build has "codecommit:GitPull" permissions on the repository.
Strangely, if I go to the build in the console and uncheck the "Allow AWS CodeBuild to modify this service role so it can be used with this build project" checkbox then Update Sources, the build will work! But I can't find any way to set this from Terraform, and it will default back on if you go back to the Update Sources screen.
Here is the Terraform code I'm using to create the build.
# IAM role for CodeBuild
resource "aws_iam_role" "codebuild_myapp_build_role" {
name = "mycompany-codebuild-myapp-build-service-role"
description = "Managed by Terraform"
path = "/service-role/"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
# IAM policy for the CodeBuild role
resource "aws_iam_policy" "codebuild_myapp_build_policy" {
name = "mycompany-codebuild-policy-myapp-build-us-east-1"
description = "Managed by Terraform"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:CompleteLayerUpload",
"ecr:GetAuthorizationToken",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"logs:CreateLogStream",
"codecommit:GitPull",
"logs:PutLogEvents",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build",
"arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build:*",
"arn:aws:s3:::codepipeline-us-east-1-*",
"arn:aws:codecommit:us-east-1:000000000000:mycompany-devops-us-east-1"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": [
"arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build",
"arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build:*"
]
}
]
}
POLICY
}
# attach the policy
resource "aws_iam_role_policy_attachment" "codebuild_myapp_build_policy_att" {
role = "${aws_iam_role.codebuild_myapp_build_role.name}"
policy_arn = "${aws_iam_policy.codebuild_myapp_build_policy.arn}"
}
# codebuild project
resource "aws_codebuild_project" "codebuild_myapp_build" {
name = "myapp-build"
build_timeout = "60"
service_role = "${aws_iam_role.codebuild_myapp_build_role.arn}"
artifacts {
type = "NO_ARTIFACTS"
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/docker:17.09.0"
type = "LINUX_CONTAINER"
privileged_mode = "true"
environment_variable {
"name" = "AWS_DEFAULT_REGION"
"value" = "us-east-1"
}
environment_variable {
"name" = "AWS_ACCOUNT_ID"
"value" = "000000000000"
}
environment_variable {
"name" = "IMAGE_REPO_NAME"
"value" = "myapp-build"
}
environment_variable {
"name" = "IMAGE_TAG"
"value" = "latest"
}
environment_variable {
"name" = "DOCKERFILE_PATH"
"value" = "docker/codebuild/myapp_build_agent"
}
}
source {
type = "CODECOMMIT"
location = "mycompany-devops-us-east-1"
git_clone_depth = "1"
buildspec = "docker/myapp/myapp_build/buildspec.yml"
}
tags {
Name = "myapp-build"
Environment = "${var.env_name}"
Region = "${var.aws_region}"
ResourceType = "CodeBuild Project"
ManagedBy = "Terraform"
}
}
Your problem is the specification of the source:
source {
  type     = "CODECOMMIT"
  location = "mycompany-devops-us-east-1"
Here's the relevant part of the Amazon documentation for the source, with some emphasis:
For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, https://git-codecommit.region-ID.amazonaws.com/v1/repos/repo-name ).
In your case, that is probably something like this, using the 'clone url' found in the codecommit console:
https://git-codecommit.us-east-1.amazonaws.com/v1/repos/mycompany-devops-us-east-1
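So the corrected source block would look roughly like this (keeping the rest of the question's source settings):

source {
  type            = "CODECOMMIT"
  location        = "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/mycompany-devops-us-east-1"
  git_clone_depth = "1"
  buildspec       = "docker/myapp/myapp_build/buildspec.yml"
}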
I ran into this while using a private GitHub repository as the source. In my case I gave the repository URL, not the clone URL, so the problem was very similar:
bad: https://github.com/privaterepo/reponame
good: https://github.com/privaterepo/reponame.git
