How to include a policy json file in Terraform?

I downloaded this IAM policy file and saved it in the root path beside main.tf in Terraform:
https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.1/docs/install/iam_policy.json
Then I created this resource to load the policy file:
resource "aws_iam_policy" "worker_policy" {
  name   = "worker-policy"
  policy = file("iam-policy.json")
}
Running tflint gives this error:
15:36:27 server.go:418: rpc: gob error encoding body: gob: type not registered for interface: tfdiags.diagnosticsAsError
Failed to check ruleset. An error occurred:
Error: Failed to check `aws_iam_policy_invalid_policy` rule: reading body EOF
I also tried it this way, with the same result:
policy = jsondecode(file("iam-policy.json"))

Did you use the latest version of tflint?
Because I tried it and everything was OK for me.
Here were my steps:
NOTE: tflint v0.31.0 and terraform v1.0.2
[1] wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.1/docs/install/iam_policy.json
[2] In my main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_iam_policy" "worker_policy" {
  name   = "worker-policy"
  policy = file("iam_policy.json")
}
[3] Run terraform plan
[4] Got the following output:
Terraform will perform the following actions:
  # aws_iam_policy.worker_policy will be created
  + resource "aws_iam_policy" "worker_policy" {
      + arn    = (known after apply)
      + id     = (known after apply)
      + name   = "worker-policy"
      + path   = "/"
      + policy = jsonencode(
            {
              + Statement = [
                  + {
                      + Action = [
                          + "iam:CreateServiceLinkedRole",
                          + "ec2:DescribeAccountAttributes",
                          + "ec2:DescribeAddresses",
                          ...
[5] Run tflint
~/Work/Other/test ❯ tflint --init
Plugin `aws` is already installed
~/Work/Other/test ❯ tflint
~/Work/Other/test ❯
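One detail worth double-checking against the question: the file downloaded in step [1] is named iam_policy.json, while the failing configuration reads iam-policy.json. It can also help to resolve the file relative to the module directory. A minimal sketch along those lines:

resource "aws_iam_policy" "worker_policy" {
  name = "worker-policy"

  # path.module resolves relative to the directory containing this .tf file
  policy = file("${path.module}/iam_policy.json")
}

As for the jsondecode attempt: the policy argument expects a JSON string, while jsondecode returns an HCL object, so jsondecode(file("iam-policy.json")) on its own won't be accepted. If you want Terraform to normalize the JSON, you can round-trip it with jsonencode(jsondecode(file("${path.module}/iam_policy.json"))).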

Related

I am getting an invalid ARN from Terraform

I have an issue where I am not getting the instance-profile in the ARN path. Code snippet:
resource "aws_launch_template" "launch-template" {
image_id = data.aws_ami.ecs.id
instance_type = "c5.large"
iam_instance_profile {
arn = aws_iam_role.ecsInstanceRole.arn
}
}
resource "aws_iam_role" "ecsInstanceRole" {
name = "assess-instance-role"
assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}
I get the following error:
Error: error creating EC2 Launch Template (lt-12344444444444) Version: InvalidIamInstanceProfileArn.Malformed: The ARN 'arn:aws:iam::1234444444444:role/assess-instance-role' is not valid. The expected format is arn:aws:iam:::instance-profile/<instance-profile-name>.
I am on the following version:
Terraform v1.2.3
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v3.75.2
As Jordanm pointed out in the comments, you can't attach a role directly to an EC2 instance; you must create an instance profile from the role: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile
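A minimal sketch of that fix, reusing the role from the question (the profile name is illustrative); the launch template should receive the instance profile ARN, not the role ARN:

resource "aws_iam_instance_profile" "ecsInstanceProfile" {
  name = "assess-instance-profile"
  role = aws_iam_role.ecsInstanceRole.name
}

resource "aws_launch_template" "launch-template" {
  image_id      = data.aws_ami.ecs.id
  instance_type = "c5.large"

  iam_instance_profile {
    # the instance profile ARN, not the role ARN
    arn = aws_iam_instance_profile.ecsInstanceProfile.arn
  }
}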

Unsupported argument, An argument named "" is not expected here

I am getting the error below when I run terraform plan. The idea of this IAM role is to allow Lambda to run another AWS service (Step Functions) once it finishes executing.
Why does terraform fail with "An argument named "" is not expected here"?
Terraform version
Terraform v0.12.31
The error
Error: Unsupported argument
on iam.tf line 246, in resource "aws_iam_role" "lambda_role":
246: managed_policy_arns = var.managed_policy_arns
An argument named "managed_policy_arns" is not expected here.
Error: Unsupported block type
on iam.tf line 260, in resource "aws_iam_role" "lambda_role":
260: inline_policy {
Blocks of type "inline_policy" are not expected here.
The code for iam.tf:
resource "aws_iam_role" "lambda_role" {
name = "${var.name}-role"
managed_policy_arns = var.managed_policy_arns
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "lambda.amazonaws.com"
}
},
]
})
inline_policy {
name = "step_function_policy"
policy = jsonencode({
Version = "2012-10-17"
Statement: [
{
Effect: "Allow"
Action: ["states:StartExecution"]
Resource: "*"
}
]
})
}
}
For future reference: I fixed this issue by using a higher version of the AWS provider.
The provider.tf was like the following:
provider "aws" {
region = var.region
version = "< 3.0"
}
Change it to this:
provider "aws" {
region = var.region
version = "<= 3.37.0"
}
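For what it's worth, the failing arguments (managed_policy_arns and inline_policy) simply don't exist in the 2.x provider that the old "< 3.0" constraint resolves to, which is why Terraform reports them as unsupported. Also note that pinning the version inside the provider block is deprecated; on Terraform 0.13+ the equivalent pin would live in a required_providers block. A minimal sketch, assuming you've also upgraded Terraform itself (on 0.12 the provider-block version shown above still works):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # any 3.x release new enough to support managed_policy_arns / inline_policy
      version = "~> 3.37"
    }
  }
}

provider "aws" {
  region = var.region
}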

Unable to create azurerm_cosmosdb_sql_container with indexing_policy block

According to the Terraform.io documentation for azurerm_cosmosdb_sql_container, I can include an indexing_policy block. However, when I run terraform plan I get these errors:
Error: Unsupported block type
  on main.tf line 912, in resource "azurerm_cosmosdb_sql_container" "AccountActivity":
 912:   indexing_policy {
Blocks of type "indexing_policy" are not expected here.
main.tf
resource "azurerm_cosmosdb_sql_container" "AccountActivity" {
name = "AccountActivity"
resource_group_name = azurerm_resource_group.backendResourceGroup.name
account_name = azurerm_cosmosdb_account.AzureCosmosAccount.name
database_name = azurerm_cosmosdb_sql_database.AzureCosmosDbCache.name
default_ttl = 2592000
throughput = 2500
indexing_policy {
indexing_mode = "Consistent"
included_path {
path = "/*"
}
excluded_path {
path = "/\"_etag\"/?"
}
}
}
Here is my terraform version output:
terraform version
Terraform v0.13.4
+ provider registry.terraform.io/-/azurerm v2.30.0
+ provider registry.terraform.io/hashicorp/azurerm v2.20.0
+ provider registry.terraform.io/hashicorp/random v2.3.0
After searching GitHub, I finally found that support for the indexing_policy block was added in this commit 26 days ago. The documentation doesn't mention this, nor do the release notes for azurerm v2.31.1. After updating my main.tf file with the latest azurerm version and running terraform init, the terraform plan command worked without issue.
provider "azurerm" {
version = "~>2.31.1"
features {}
}
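One detail worth calling out: raising the version constraint only helps once the newer provider is actually downloaded, so rerun terraform init after editing it (the terraform version output above shows azurerm 2.20.0/2.30.0 still in use). On later Terraform releases that keep a .terraform.lock.hcl dependency lock file, terraform init -upgrade is needed to move past the locked version.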

terraform apply could not find the resource helm_release

I am trying to set up Helm and Helm releases through Terraform. As per terraform plan:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # helm_release.prometheus_vsi will be created
  + resource "helm_release" "prometheus_vsi" {
      + chart            = "stable/prometheus"
      + disable_webhooks = false
      + force_update     = false
      + id               = (known after apply)
      + metadata         = (known after apply)
      + name             = "prometheus-vsi"
      + namespace        = "prometheus"
      + recreate_pods    = false
      + repository       = "stable"
      + reuse            = false
      + reuse_values     = false
      + status           = "DEPLOYED"
      + timeout          = 300
      + values           = [
          + <<~EOT
                rbac:
                  create: true
                  enabled: false
            EOT,
        ]
      + verify           = false
      + version          = "10.2.0"
      + wait             = true
    }
Plan: 1 to add, 0 to change, 0 to destroy.
but when I run terraform apply it throws the error mentioned in "Panic Output".
Terraform Version
Terraform v0.12.18
+ provider.aws v2.43.0
+ provider.helm v0.10.4
+ provider.kubernetes v1.10.0
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.random v2.2.1
+ provider.template v2.1.2
Your version of Terraform is out of date! The latest version
is 0.12.19. You can update by downloading from https://www.terraform.io/downloads.html
Affected Resource(s)
helm_release
Terraform Configuration Files
provider "helm" {
version = "~> 0.10"
install_tiller = true
service_account = local.helm_service_account_name
debug = true
kubernetes {
config_path = "${path.module}/kubeconfig_${module.eks.kubeconfig}"
}
}
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
resource "helm_release" "prometheus_vsi" {
name = "prometheus-vsi"
repository = data.helm_repository.stable.metadata[0].name
chart = "stable/prometheus"
namespace = local.prometheus_ns
version = "10.2.0"
values = [
"${file("${local.chart_root}/prometheus/prometheus-values.yaml")}"
]
}
Debug Output
I have enabled debug = true but it is not producing any Helm-specific logs.
Panic Output
Error: error installing: the server could not find the requested resource (post deployments.apps)
on main.tf line 205, in resource "helm_release" "prometheus_vsi":
205: resource "helm_release" "prometheus_vsi" {
Expected Behavior
As per terraform plan it should create helm_release in kubernetes.
Actual Behavior
terraform apply throws the error above.
Steps to Reproduce
terraform apply
Thanks.
The stable repo is deprecated and all of its charts were removed in November 2020.
Try the chart: prometheus-community/kube-prometheus-stack
URL: https://prometheus-community.github.io/helm-charts
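A minimal sketch of a release pointing at that repository (the chart and resource names are illustrative, and the locals are reused from the question; newer versions of the helm provider accept the repository URL directly instead of a helm_repository data source):

resource "helm_release" "prometheus" {
  name       = "kube-prometheus-stack"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  namespace  = local.prometheus_ns

  values = [
    file("${local.chart_root}/prometheus/prometheus-values.yaml")
  ]
}

Keep in mind that chart values differ between stable/prometheus and kube-prometheus-stack, so the existing values file will likely need adjusting.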

Terraform with vSphere: the operation is not supported on the object (resource pool)

I have a Terraform file to create a resource pool on my home vSphere instance. The Terraform file looks as follows:
provider "vsphere" {
vsphere_server = "${var.vsphere_server}"
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
allow_unverified_ssl = true
}
data "vsphere_datacenter" "dc" {
name = "Datacenter1"
}
data "vsphere_compute_cluster" "compute_cluster" {
name = "Cluster1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
resource "vsphere_resource_pool" "resource_pool" {
name = "terraform-resource-pool-test"
parent_resource_pool_id = "${data.vsphere_compute_cluster.compute_cluster.resource_pool_id}"
}
The output from terraform plan is the following:
  # vsphere_resource_pool.resource_pool will be created
  + resource "vsphere_resource_pool" "resource_pool" {
      + cpu_expandable          = true
      + cpu_limit               = -1
      + cpu_reservation         = 0
      + cpu_share_level         = "normal"
      + cpu_shares              = (known after apply)
      + id                      = (known after apply)
      + memory_expandable       = true
      + memory_limit            = -1
      + memory_reservation      = 0
      + memory_share_level      = "normal"
      + memory_shares           = (known after apply)
      + name                    = "terraform-resource-pool-test"
      + parent_resource_pool_id = "resgroup-8"
    }
Plan: 1 to add, 0 to change, 0 to destroy.
But I always get back the following error:
vsphere_resource_pool.resource_pool: Creating...
Error: ServerFaultCode: The operation is not supported on the object.
  on main.tf line 34, in resource "vsphere_resource_pool" "resource_pool":
  34: resource "vsphere_resource_pool" "resource_pool" {
Any idea how to solve this? I'm using vSphere version 6.0.0, build 3617395.
The code looks fine, so a manual fix should help in this case.
Since it is your own system, it should be fine to clean the tfstate files; otherwise, back them up first.
Clean the environment:
# remove this folder and these files from the current directory, where you run `terraform apply`
rm -rf .terraform
rm terraform.tfstate*   # also in any subfolders
# remove this folder from the home directory
rm -rf ~/.terraform.d/
Then deploy again:
terraform init
terraform plan
terraform apply
