Unsupported argument, An argument named "" is not expected here - terraform

I am getting the error below when I run terraform plan. The idea of this IAM role is to allow Lambda to invoke another AWS service (Step Functions) once it finishes executing.
Why does Terraform fail with "An argument named "" is not expected here"?
Terraform version
Terraform v0.12.31
The error
Error: Unsupported argument
on iam.tf line 246, in resource "aws_iam_role" "lambda_role":
246: managed_policy_arns = var.managed_policy_arns
An argument named "managed_policy_arns" is not expected here.
Error: Unsupported block type
on iam.tf line 260, in resource "aws_iam_role" "lambda_role":
260: inline_policy {
Blocks of type "inline_policy" are not expected here.
The code for iam.tf:
resource "aws_iam_role" "lambda_role" {
name = "${var.name}-role"
managed_policy_arns = var.managed_policy_arns
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "lambda.amazonaws.com"
}
},
]
})
inline_policy {
name = "step_function_policy"
policy = jsonencode({
Version = "2012-10-17"
Statement: [
{
Effect: "Allow"
Action: ["states:StartExecution"]
Resource: "*"
}
]
})
}
}

For future reference, I fixed this issue by using a newer version of the AWS provider.
The provider.tf originally looked like this:
provider "aws" {
region = var.region
version = "< 3.0"
}
Change it to this:
provider "aws" {
region = var.region
version = "<= 3.37.0"
}
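The managed_policy_arns argument and the inline_policy block only exist in newer 3.x releases of the AWS provider, which is why a provider pinned below 3.0 rejects them. If upgrading is not an option, a minimal sketch of an equivalent setup (assuming the same variables as above) can use the long-standing standalone resources aws_iam_role_policy_attachment and aws_iam_role_policy instead:
# Sketch: same effect without managed_policy_arns / inline_policy,
# using standalone resources that also work on older provider versions.
resource "aws_iam_role_policy_attachment" "managed" {
  # var.managed_policy_arns is assumed to be a list of policy ARNs
  count      = length(var.managed_policy_arns)
  role       = aws_iam_role.lambda_role.name
  policy_arn = var.managed_policy_arns[count.index]
}

resource "aws_iam_role_policy" "step_function_policy" {
  name = "step_function_policy"
  role = aws_iam_role.lambda_role.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["states:StartExecution"]
        Resource = "*"
      }
    ]
  })
}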

Related

Terraform with ECS: Invalid arn syntax

I receive this error when running terraform apply (I deploy a container using an ECS task that connects to RDS with Terraform):
Error: creating ECS Task Definition (project_task): ClientException: Invalid arn syntax.
│
│ with module.ecs.aws_ecs_task_definition.project_task,
│ on modules/ecs/main.tf line 37, in resource "aws_ecs_task_definition" "project_task":
│ 37: resource "aws_ecs_task_definition" "project_task" {
│
As seen in main.tf, I declared the execution role:
data "aws_ecr_repository" "project_ecr_repo" {
name = "project-ecr-repo"
}
resource "aws_ecs_cluster" "project_cluster" {
name = "project-cluster"
}
data "aws_iam_policy_document" "ecs_task_execution_role" {
version = "2012-10-17"
statement {
sid = ""
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}
# ECS task execution role
resource "aws_iam_role" "ecs_task_execution_role" {
name = "ecs_task_execution_role"
assume_role_policy = "${data.aws_iam_policy_document.ecs_task_execution_role.json}"
}
# ECS task execution role policy attachment
resource "aws_iam_role_policy_attachment" "ecs_task_execution_role" {
role = "${aws_iam_role.ecs_task_execution_role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
resource "aws_ecs_task_definition" "project_task" {
family = "project_task"
container_definitions = file("container_def.json")
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
memory = 512
cpu = 256
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
}
resource "aws_ecs_service" "project_service" {
name = "project-service"
cluster = aws_ecs_cluster.project_cluster.id
task_definition = aws_ecs_task_definition.project_task.arn
launch_type = "FARGATE"
desired_count = 2
network_configuration {
subnets = var.vpc.public_subnets
assign_public_ip = true
}
}
And here is my container definition file:
[
  {
    "name": "backend_feed",
    "image": "639483503131.dkr.ecr.us-east-1.amazonaws.com/backend-feed:latest",
    "cpu": 256,
    "memory": 512,
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 8080,
        "protocol": "tcp"
      }
    ],
    "environmentFiles": [
      {
        "value": "https://myawsbucket-639483503131.s3.amazonaws.com/env_vars.json",
        "type": "s3"
      }
    ]
  }
]
I appreciate your help, thank you.
I ran terraform apply -auto-approve and expected it to create the ECS task with the provided container specs.
Your environmentFiles value is a web URL, while ECS expects an S3 object ARN. Also, the documentation says the environment file must have a .env extension.
So first you need to rename env_vars.json to env_vars.env; the file can't be in JSON format, it has to contain one VARIABLE=VALUE per line.
Then you need to specify the environmentFiles value property as an ARN:
"value": "arn:aws:s3:::myawsbucket-639483503131/env_vars.env"

While creating Azure App service via terraform throwing an error An argument named "zone_redundant" is not expected here

I'm trying to create a zone-redundant Azure App Service for high availability, but terraform validate throws the error An argument named "zone_redundant" is not expected here.
My configuration looks like this:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

resource "azurerm_app_service_plan" "example" {
  name                = "app-demo"
  location            = "Australia East"
  resource_group_name = "rg-app-service"
  kind                = "Linux"
  reserved            = true
  zone_redundant      = true

  sku {
    tier     = "PremiumV2"
    size     = "P1v2"
    capacity = "3"
  }
}
I'm not sure what I'm missing here. Can anyone please advise me on this?
Reference
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/app_service_plan#zone_redundant
You are using azurerm provider version 2.46.0.
The zone_redundant option of the azurerm_app_service_plan resource was added in azurerm provider version 2.74.0, which is why you are getting the error "An argument named "zone_redundant" is not expected here."
Please update the azurerm provider version in your code:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.74.0"
    }
  }
}
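After loosening the version constraint, you typically need to re-initialise the working directory so the newer provider is actually downloaded:
terraform init -upgrade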

{ClientError}An error occurred (ValidationException) when calling the RunJobFlow operation: Invalid InstanceProfile

Using Terraform, I deployed an IAM role to be used with EMR:
data "aws_iam_policy_document" "emr_assume_role" {
statement {
sid = "EMRAssume"
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"elasticmapreduce.amazonaws.com"
]
}
}
}
resource "aws_iam_role" "my_emr_ec2_instance_role" {
name = "my_emr_ec2_instance_role"
assume_role_policy = data.aws_iam_policy_document.emr_assume_role.json
}
resource "aws_iam_policy" "emr_ec2_instances_policy" {
name = "emr_ec2_instances_policy"
policy = file("${path.module}/my/path/my_emr_instance_role_policy.json")
}
resource "aws_iam_role_policy_attachment" "policy_attachment" {
role = aws_iam_role.my_emr_ec2_instance_role.name
policy_arn = aws_iam_policy.emr_ec2_instances_policy.arn
}
Then, when I try to call the run_job_flow() method from boto3 like this:
client.run_job_flow(
    Name="EMR",
    LogUri=logs_uri,
    ReleaseLabel='emr-6.2.0',
    Instances=instances,
    VisibleToAllUsers=True,
    Steps=steps,
    BootstrapActions=ba,
    Applications=[{'Name': 'Spark'}],
    ServiceRole='my_service_role_emr',
    JobFlowRole='my_emr_ec2_instance_role',
    Tags=tags)
I immediately receive the following error message:
{ClientError}An error occurred (ValidationException) when calling the RunJobFlow operation: Invalid InstanceProfile my_emr_ec2_instance_role
How can I resolve this?
I'm sharing my experience in the hope that it helps someone else; please share yours if it differs.
In my case, the first mistake was the identifiers field, which should have had "ec2.amazonaws.com" as its value, so the aws_iam_policy_document block becomes:
data "aws_iam_policy_document" "emr_assume_role" {
statement {
sid = "EMRAssume"
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"ec2.amazonaws.com"
]
}
}
}
Another issue concerns the instance profile, which would have been created automatically if the role had been created from the AWS Console; Terraform does not create it for you. This block of code should fix the problem:
resource "aws_iam_instance_profile" "emr_ec2_instance_profile" {
name = aws_iam_role.my_emr_ec2_instance_role.name
role = aws_iam_role.my_emr_ec2_instance_role.name
}
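For completeness: the elasticmapreduce.amazonaws.com principal from the original policy document still belongs on the EMR service role (the one passed as ServiceRole='my_service_role_emr'); only the EC2 instance role and its instance profile need ec2.amazonaws.com. A minimal sketch of that service role, with an illustrative data source name, might look like:
data "aws_iam_policy_document" "emr_service_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["elasticmapreduce.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "my_service_role_emr" {
  # Role name assumed to match the ServiceRole argument used in run_job_flow()
  name               = "my_service_role_emr"
  assume_role_policy = data.aws_iam_policy_document.emr_service_assume_role.json
}
Note that the JobFlowRole argument of run_job_flow() expects the name of the EC2 instance profile, which works here because the instance profile above reuses the role's name.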

Terraform Multiple Provider issue

I am following the documentation here, running Terraform v0.14.10: https://www.terraform.io/docs/language/modules/develop/providers.html
My config is as follows:
variables.tf
terraform {
  backend "remote" {
    organization = "the-xxxx"
    workspaces {
      prefix = "non-prod-"
    }
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.32.0"
    }
  }
}

provider "aws" {
}

provider "aws" {
  alias   = "core_db_middleware"
  profile = "core_db_middleware"
}

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}
In main.tf, where I call my module:
module "vpc" {
source = "../modules/infrastructure/shared/aws/vpc"
providers = {
aws.core_db_middleware = aws.core_db_middleware
}
class_b_block = var.class_b_block
platform_name = var.platform_name
core_platform_aws_account_id = var.core_platform_aws_account_id
core_platform_vpc_id = var.core_platform_vpc_id
core_platform_region = var.core_platform_region
core_platform_cidr_range = var.core_platform_cidr_range
}
Then in the "vpc" module variables.tf I have:
provider "aws" {}
provider "aws" {
alias = "core_db_middleware"
}
data "aws_caller_identity" "requester" {
provider = aws.core_db_middleware
}
data "aws_region" "requester" {
provider = aws.core_db_middleware
}
When I run terraform plan, it keeps giving me the AWS account ID of the default provider, even though I have:
data "aws_caller_identity" "requester" {
provider = aws.core_db_middleware
}
and use it like so:
resource "aws_vpc_peering_connection" "core_db_middleware_requester" {
peer_owner_id = data.aws_caller_identity.requester.account_id
peer_vpc_id = "vpc-xxxxxxxxxxxxxx"
vpc_id = aws_vpc.main.id
peer_region = data.aws_region.requester.name
auto_accept = false
tags = {
Name = "VPC Peering between ${var.platform_name} and core_db_middleware"
}
}
I tried adding configuration_aliases = [ aws.core_db_middleware ], as described in the official documentation, both in my root directory and in the "vpc" module directory. I get the error below when it is in my root module:
Error: Invalid required_providers object
on variables.tf line 9, in terraform:
9: aws = {
10: source = "hashicorp/aws"
11: version = "3.32.0"
12: configuration_aliases = [ aws.core_db_middleware ]
13: }
required_providers objects can only contain "version" and "source" attributes.
To configure a provider, use a "provider" block.
Error: Variables not allowed
on variables.tf line 12, in terraform:
12: configuration_aliases = [ aws.core_db_middleware ]
Variables may not be used here.
and the below error when I place it in the "vpc" module:
Error: Variables not allowed
On ../modules/infrastructure/shared/aws/vpc/variables.tf line 33: Variables
may not be used here.
(the same error is repeated four times)
I cannot figure out where I am going wrong :( I also have my default provider AWS environment variables set in the pipeline environment, e.g.:
AWS_ACCESS_KEY_ID: $xxxxxxxxxxx_AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $xxxxxxxxxx_AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: 'ap-southeast-2'
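One point that explains the errors above: configuration_aliases inside required_providers was only introduced in Terraform 0.15, so v0.14.10 does not recognise it and instead tries (and fails) to parse aws.core_db_middleware as a variable reference. On Terraform 0.15+ the documented place for it is the child module's own required_providers block; a sketch (not the full fix for this setup) would be:
# In modules/infrastructure/shared/aws/vpc, on Terraform 0.15 or later
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.core_db_middleware]
    }
  }
}
On 0.13/0.14, the older equivalent is the empty "proxy" provider block the module already declares (provider "aws" { alias = "core_db_middleware" }).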

Terraform: Interpolation could be replaced by HCL2 expression

When I try to use interpolation syntax like this:
vpc_id = "${aws_vpc.prod-vpc.id}"
I get a suggestion in IntelliJ that "Interpolation could be replaced by HCL2 expression", so if I change the line to this:
vpc_id = "aws_vpc.prod-vpc.id"
and issue terraform apply, I get:
C:\tf_ptojects\aws\subnet>terraform apply -auto-approve
aws_subnet.prod-subnet: Creating...
aws_vpc.prod-vpc: Creating...
aws_vpc.prod-vpc: Creation complete after 2s [id=vpc-0cfb27255522bdf15]
Error: error creating subnet: InvalidVpcID.NotFound: The vpc ID 'aws_vpc.prod-vpc.id' does not exist
status code: 400, request id: dab3fb03-424d-4bf2-ace6-bef93a94ee9c
If I restore the interpolation syntax and run terraform apply again, the resources get deployed, but Terraform warns that interpolation-only expressions are deprecated:
Warning: Interpolation-only expressions are deprecated
on main.tf line 16, in resource "aws_subnet" "prod-subnet":
16: vpc_id = "${aws_vpc.prod-vpc.id}"
So Terraform discourages the interpolation syntax, yet throws an error when it's not used. Is this some kind of bug?
C:\tf_ptojects\aws\subnet>terraform -version
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/aws v3.25.0
Entire TF code for reference:
provider "aws" {
region = "eu-central-1"
}
resource "aws_vpc" "prod-vpc" {
cidr_block = "10.100.0.0/16"
tags = {
name = "production vpc"
}
}
resource "aws_subnet" "prod-subnet" {
cidr_block = "10.100.1.0/24"
vpc_id = "aws_vpc.prod-vpc.id"
tags = {
name = "prod-subnet"
}
}
You just have to reference the ID without the double quotes, i.e. vpc_id = aws_vpc.prod-vpc.id, because you are getting the VPC ID from the resource.
If you use double quotes, the value is treated as a literal string and no evaluation is done, so Terraform takes "aws_vpc.prod-vpc.id" itself as the ID.
This is the corrected code:
provider "aws" {
region = "eu-central-1"
}
resource "aws_vpc" "prod-vpc" {
cidr_block = "10.100.0.0/16"
tags = {
name = "production vpc"
}
}
resource "aws_subnet" "prod-subnet" {
cidr_block = "10.100.1.0/24"
vpc_id = aws_vpc.prod-vpc.id
tags = {
name = "prod-subnet"
}
}
I have tested the above code snippet and it works fine.
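As a side note, the interpolation syntax is still needed when a reference is combined with literal text; the deprecation warning only applies to strings that contain nothing but a single interpolation. An illustrative (not original) tag value:
tags = {
  # "${...}" is still required when mixing a reference with other characters
  name = "prod-subnet-${aws_vpc.prod-vpc.id}"
}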