terraform apply could not find the resource helm_release - terraform

I am trying to set up Helm and Helm releases through Terraform. Per terraform plan:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# helm_release.prometheus_vsi will be created
+ resource "helm_release" "prometheus_vsi" {
+ chart = "stable/prometheus"
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ metadata = (known after apply)
+ name = "prometheus-vsi"
+ namespace = "prometheus"
+ recreate_pods = false
+ repository = "stable"
+ reuse = false
+ reuse_values = false
+ status = "DEPLOYED"
+ timeout = 300
+ values = [
+ <<~EOT
rbac:
create: true
enabled: false
EOT,
]
+ verify = false
+ version = "10.2.0"
+ wait = true
}
Plan: 1 to add, 0 to change, 0 to destroy.
But when I run terraform apply, it throws the error shown under "Panic Output" below.
Terraform Version
Terraform v0.12.18
+ provider.aws v2.43.0
+ provider.helm v0.10.4
+ provider.kubernetes v1.10.0
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.random v2.2.1
+ provider.template v2.1.2
Your version of Terraform is out of date! The latest version
is 0.12.19. You can update by downloading from https://www.terraform.io/downloads.html
Affected Resource(s)
helm_release
Terraform Configuration Files
provider "helm" {
version = "~> 0.10"
install_tiller = true
service_account = local.helm_service_account_name
debug = true
kubernetes {
config_path = "${path.module}/kubeconfig_${module.eks.kubeconfig}"
}
}
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
resource "helm_release" "prometheus_vsi" {
name = "prometheus-vsi"
repository = data.helm_repository.stable.metadata[0].name
chart = "stable/prometheus"
namespace = local.prometheus_ns
version = "10.2.0"
values = [
"${file("${local.chart_root}/prometheus/prometheus-values.yaml")}"
]
}
Debug Output
I have enabled debug = true, but it is not producing any Helm-specific logs.
Panic Output
Error: error installing: the server could not find the requested resource (post deployments.apps)
on main.tf line 205, in resource "helm_release" "prometheus_vsi":
205: resource "helm_release" "prometheus_vsi" {
Expected Behavior
Per terraform plan, it should create the helm_release in Kubernetes.
Actual Behavior
terraform apply throws the error above.
Steps to Reproduce
terraform apply
Thanks.

The stable repo is deprecated and all of its charts were removed in November 2020.
Try the chart prometheus-community/kube-prometheus-stack instead:
URL: https://prometheus-community.github.io/helm-charts
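Assuming a newer helm provider (1.x or later), where repository takes the chart repository URL directly rather than a helm_repository data source, the release would look roughly like this (release name, namespace, and values file reused from the question; treat it as a sketch, not a drop-in fix):
resource "helm_release" "prometheus_vsi" {
  name       = "prometheus-vsi"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  namespace  = local.prometheus_ns

  values = [
    file("${local.chart_root}/prometheus/prometheus-values.yaml")
  ]
}
Note that the existing prometheus-values.yaml was written for the old stable/prometheus chart, so its values will likely need to be adapted to the kube-prometheus-stack chart's schema.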

Related

How to use terraform depends_on to dictate ordering of resource creation?

I have the following terraform resources in a file
resource "google_project_service" "cloud_resource_manager" {
project = var.tf_project_id
service = "cloudresourcemanager.googleapis.com"
disable_dependent_services = true
}
resource "google_project_service" "artifact_registry" {
project = var.tf_project_id
service = "artifactregistry.googleapis.com"
disable_dependent_services = true
depends_on = [google_project_service.cloud_resource_manager]
}
resource "google_artifact_registry_repository" "el" {
provider = google-beta
project = var.tf_project_id
location = var.region
repository_id = "el"
description = "Repository for extract/load docker images"
format = "DOCKER"
depends_on = [google_project_service.artifact_registry]
}
However, when I run terraform plan, I get this
Terraform will perform the following actions:
# google_artifact_registry_repository.el will be created
+ resource "google_artifact_registry_repository" "el" {
+ create_time = (known after apply)
+ description = "Repository for extract/load docker images"
+ format = "DOCKER"
+ id = (known after apply)
+ location = "us-central1"
+ name = (known after apply)
+ project = "backbone-third-party-data"
+ repository_id = "el"
+ update_time = (known after apply)
}
# google_project_iam_member.ingest_sa_roles["cloudscheduler.serviceAgent"] will be created
+ resource "google_project_iam_member" "ingest_sa_roles" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = (known after apply)
+ project = "backbone-third-party-data"
+ role = "roles/cloudscheduler.serviceAgent"
}
# google_project_iam_member.ingest_sa_roles["run.invoker"] will be created
+ resource "google_project_iam_member" "ingest_sa_roles" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = (known after apply)
+ project = <my project id>
+ role = "roles/run.invoker"
}
# google_project_service.artifact_registry will be created
+ resource "google_project_service" "artifact_registry" {
+ disable_dependent_services = true
+ disable_on_destroy = true
+ id = (known after apply)
+ project = <my project id>
+ service = "artifactregistry.googleapis.com"
}
See how google_project_service.artifact_registry is created after google_artifact_registry_repository.el. I was hoping that my depends_on in resource google_artifact_registry_repository.el would ensure the service was created first. Am I misunderstanding how depends_on works? Or does the order of resources listed by terraform plan not actually reflect the order in which they are created?
Edit: when I run terraform apply it errors out with
Error 403: Cloud Resource Manager API has not been used in project 521986354168 before or it is disabled
Even though it is enabled. I think it's doing this because it's creating the artifact registry resource before enabling the project services?
I don't think it will be possible to enable this particular API this way, as the google_project_service resource itself depends on the Resource Manager API (and maybe also the Service Usage API?) being enabled. So you could either enable those manually or use a null_resource with a local-exec provisioner to do it automatically:
resource "null_resource" "enable_cloudresourcesmanager_api" {
provisioner "local-exec" {
command = "gcloud services enable cloudresourcesmanager.googleapis.com cloudresourcemanager.googleapis.com --project ${var.project_id}"
}
}
Another issue you might run into is that enabling an API takes some time, depending on the service. So sometimes, even though your resources depend on the resource that enables an API, you will still get the same error message. You can then just reapply your configuration, and since the API has had time to initialize, the second apply will work. In some cases this is good enough, but if you are building a reusable module you might want to avoid those reapplies. In that case you can use a time_sleep resource to wait for the API initialization:
resource "time_sleep" "wait_for_cloudresourcemanager_api" {
depends_on = [null_resource.enable_cloudresourcesmanager_api]
# or: depends_on = [google_project_service.some_other_api]
create_duration = "30s"
}
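For completeness, a sketch of how a downstream resource then chains off that delay, reusing the repository resource from the question (illustrative only):
resource "google_artifact_registry_repository" "el" {
  provider      = google-beta
  project       = var.tf_project_id
  location      = var.region
  repository_id = "el"
  description   = "Repository for extract/load docker images"
  format        = "DOCKER"

  # wait until the API has had time to initialize before creating the repository
  depends_on = [time_sleep.wait_for_cloudresourcemanager_api]
}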

How to include a policy json file in Terraform?

I downloaded this IAM policy file and saved it in the root path beside main.tf in my Terraform project:
https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.1/docs/install/iam_policy.json
Then I created this resource to load the policy file:
resource "aws_iam_policy" "worker_policy" {
name = "worker-policy"
policy = file("iam-policy.json")
}
tflint gives this error:
15:36:27 server.go:418: rpc: gob error encoding body: gob: type not registered for interface: tfdiags.diagnosticsAsError
Failed to check ruleset. An error occurred:
Error: Failed to check `aws_iam_policy_invalid_policy` rule: reading body EOF
I also tried this, with the same result:
policy = jsondecode(file("iam-policy.json"))
Did you use the latest version of tflint? I've tried it and everything was OK for me.
These were my steps:
NOTE: tflint v0.31.0 and terraform v1.0.2
[1] wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.1/docs/install/iam_policy.json
[2] In my main.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
resource "aws_iam_policy" "worker_policy" {
name = "worker-policy"
policy = file("iam_policy.json")
}
[3] Run terraform plan
[4] Got:
Terraform will perform the following actions:
# aws_iam_policy.worker_policy will be created
+ resource "aws_iam_policy" "worker_policy" {
+ arn = (known after apply)
+ id = (known after apply)
+ name = "worker-policy"
+ path = "/"
+ policy = jsonencode(
{
+ Statement = [
+ {
+ Action = [
+ "iam:CreateServiceLinkedRole",
+ "ec2:DescribeAccountAttributes",
+ "ec2:DescribeAddresses",
...
[5] Run tflint
~/Work/Other/test ❯ tflint --init
Plugin `aws` is already installed
~/Work/Other/test ❯ tflint
~/Work/Other/test ❯
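As an aside on the original question: the policy argument expects a JSON string, which is why file() works while jsondecode() on its own does not (it returns an object). If you would rather keep the document in HCL, you can build it with jsonencode() instead; a minimal illustrative sketch (the resource name and statement below are made up, not the downloaded policy):
resource "aws_iam_policy" "worker_policy_inline" {
  name = "worker-policy-inline"

  # jsonencode() turns the HCL object into the JSON string the argument expects
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ec2:DescribeAccountAttributes"]
        Resource = "*"
      }
    ]
  })
}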

Terraform - How to configure lifecycle policy for existing storage account

I have a storage account that was created in the Azure portal (outside of Terraform). I want to configure a lifecycle management policy to delete older blobs. I tried terraform import to import the resource (the storage account), but the settings seem to differ: when I run terraform plan, it says it will replace or create the storage account.
But I don't want to recreate the storage account, which already has some data in it.
provider "azurerm" {
features {}
skip_provider_registration = "true"
}
variable "LOCATION" {
default = "northeurope"
description = "Region to deploy into"
}
variable "RESOURCE_GROUP" {
default = "[RETRACTED]" # The value is same in azure portal
description = "Name of the resource group"
}
variable "STORAGE_ACCOUNT" {
default = "[RETRACTED]" # The value is same in azure portal
description = "Name of the storage account where to store the backup"
}
variable "STORAGE_ACCOUNT_RETENTION_DAYS" {
default = "180"
description = "Number of days to keep the backups"
}
resource "azurerm_resource_group" "storage-account" {
name = var.RESOURCE_GROUP
location = var.LOCATION
}
resource "azurerm_storage_account" "storage-account-lifecycle" {
name = var.STORAGE_ACCOUNT
location = azurerm_resource_group.storage-account.location
resource_group_name = azurerm_resource_group.storage-account.name
account_tier = "Standard"
account_replication_type = "RAGRS" #Read-access geo-redundant storage
}
resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
storage_account_id = azurerm_storage_account.storage-account-lifecycle.id
rule {
name = "DeleteOldBackups"
enabled = true
filters {
blob_types = ["blockBlob"]
}
actions {
base_blob {
delete_after_days_since_modification_greater_than = var.STORAGE_ACCOUNT_RETENTION_DAYS
}
}
}
}
Import resource
$ terraform import azurerm_storage_account.storage-account-lifecycle /subscriptions/[RETRACTED]
azurerm_storage_account.storage-account-lifecycle: Importing from ID "/subscriptions/[RETRACTED]...
azurerm_storage_account.storage-account-lifecycle: Import prepared!
Prepared azurerm_storage_account for import
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
The plan is below
$ terraform plan
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# azurerm_resource_group.storage-account will be created
+ resource "azurerm_resource_group" "storage-account" {
+ id = (known after apply)
+ location = "northeurope"
+ name = "[RETRACTED]"
}
# azurerm_storage_management_policy.storage-account-lifecycle-management-policy will be created
+ resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
+ id = (known after apply)
+ storage_account_id = "/subscriptions/[RETRACTED]"
+ rule {
+ enabled = true
+ name = "DeleteOldBackups"
+ actions {
+ base_blob {
+ delete_after_days_since_modification_greater_than = 180
}
}
+ filters {
+ blob_types = [
+ "blockBlob",
]
}
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform
apply" now.
From the plan, I see it will create "storage account". I also tried removing the azurerm_storage_account section and specifying the resource ID for storage_account_id in the azurerm_storage_management_policy section, but it still says # azurerm_resource_group.storage-account will be created.
How can I configure the lifecycle management policy without modifying/recreating the existing storage account?
PS: This is my first terraform script
OK, I see the problem, as Jim Xu pointed out in the comment: I didn't import the resource group, which is what the plan is saying. I imported the resource group as follows and ran terraform plan:
$ terraform import azurerm_resource_group.storage-account /subscriptions/[RETRACTED]
$ terraform plan
azurerm_resource_group.storage-account: Refreshing state... [id=/subscriptions/[RETRACTED]]
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# azurerm_storage_management_policy.storage-account-lifecycle-management-policy will be created
+ resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
+ id = (known after apply)
+ storage_account_id = "/subscriptions/[RETRACTED]"
+ rule {
+ enabled = true
+ name = "DeleteOldBackups"
+ actions {
+ base_blob {
+ delete_after_days_since_modification_greater_than = 180
}
}
+ filters {
+ blob_types = [
+ "blockBlob",
]
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.

Terraform keeps destroying existing resource

I'm trying to debug why my Terraform script is not working. For an unknown reason, Terraform keeps destroying my MySQL database and then recreating it.
Below is the output of the execution plan:
# azurerm_mysql_server.test01 will be destroyed
- resource "azurerm_mysql_server" "test01" {
- administrator_login = "me" -> null
- auto_grow_enabled = true -> null
- backup_retention_days = 7 -> null
- create_mode = "Default" -> null
- fqdn = "db-test01.mysql.database.azure.com" -> null
- geo_redundant_backup_enabled = false -> null
- id = "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01" -> null
- infrastructure_encryption_enabled = false -> null
- location = "westeurope" -> null
- name = "db-test01" -> null
- public_network_access_enabled = true -> null
- resource_group_name = "production-rg" -> null
- sku_name = "B_Gen5_1" -> null
- ssl_enforcement = "Disabled" -> null
- ssl_enforcement_enabled = false -> null
- ssl_minimal_tls_version_enforced = "TLSEnforcementDisabled" -> null
- storage_mb = 51200 -> null
- tags = {} -> null
- version = "8.0" -> null
- storage_profile {
- auto_grow = "Enabled" -> null
- backup_retention_days = 7 -> null
- geo_redundant_backup = "Disabled" -> null
- storage_mb = 51200 -> null
}
- timeouts {}
}
# module.databases.module.test.azurerm_mysql_server.test01 will be created
+ resource "azurerm_mysql_server" "test01" {
+ administrator_login = "me"
+ administrator_login_password = (sensitive value)
+ auto_grow_enabled = true
+ backup_retention_days = 7
+ create_mode = "Default"
+ fqdn = (known after apply)
+ geo_redundant_backup_enabled = false
+ id = (known after apply)
+ infrastructure_encryption_enabled = false
+ location = "westeurope"
+ name = "db-test01"
+ public_network_access_enabled = true
+ resource_group_name = "production-rg"
+ sku_name = "B_Gen5_1"
+ ssl_enforcement = (known after apply)
+ ssl_enforcement_enabled = false
+ ssl_minimal_tls_version_enforced = "TLSEnforcementDisabled"
+ storage_mb = 51200
+ version = "8.0"
+ storage_profile {
+ auto_grow = (known after apply)
+ backup_retention_days = (known after apply)
+ geo_redundant_backup = (known after apply)
+ storage_mb = (known after apply)
}
}
As far as I know, everything is exactly the same. To prevent this I also did a manual terraform import to sync the state with the remote state.
The actual resource as defined in my main.tf:
resource "azurerm_mysql_server" "test01" {
name = "db-test01"
location = "West Europe"
resource_group_name = var.rg
administrator_login = "me"
administrator_login_password = var.root_password
sku_name = "B_Gen5_1"
storage_mb = 51200
version = "8.0"
auto_grow_enabled = true
backup_retention_days = 7
geo_redundant_backup_enabled = false
infrastructure_encryption_enabled = false
public_network_access_enabled = true
ssl_enforcement_enabled = false
}
The other odd thing is that the command below reports that everything is actually in sync:
➜ terraform git:(develop) ✗ terraform plan --refresh-only
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/firstklas-production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
No changes. Your infrastructure still matches the configuration.
After an actual import the same still happens, even though the import reports that everything is in state:
➜ terraform git:(develop) ✗ terraform import azurerm_mysql_server.test01 /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
azurerm_mysql_server.test01: Importing from ID "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01"...
azurerm_mysql_server.test01: Import prepared!
Prepared azurerm_mysql_server for import
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
What can I do to prevent this destroy? Or at least figure out why the destroy is triggered? This is happening on multiple Azure instances at this point.
NOTE: the subscription ID is spoofed so don't worry
Best,
Pim
Your plan output shows that Terraform is seeing two different resource addresses:
# azurerm_mysql_server.test01 will be destroyed
# module.databases.module.test.azurerm_mysql_server.test01 will be created
Notice that the one to be created is in a nested module, not in the root module.
If your intent is to import this object to the address that is shown as needing to be created above, you'll need to specify this full address in the terraform import command:
terraform import 'module.databases.module.test.azurerm_mysql_server.test01' /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
The terraform import command tells Terraform to bind an existing remote object to a particular Terraform address, and so when you use it you need to be careful to specify the correct Terraform address to bind to.
In your case, you told Terraform to bind the object to a hypothetical resource "azurerm_mysql_server" "test01" block in the root module, but your configuration has no such block and so when you ran terraform plan Terraform assumed that you wanted to delete that object, because deleting a resource block is how we typically tell Terraform that we intend to delete something.
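If the object is already bound to the root-module address in your state (for example from the earlier import), an alternative, not part of the answer above, is to move the existing state entry to the module address instead of re-importing it:
terraform state mv 'azurerm_mysql_server.test01' 'module.databases.module.test.azurerm_mysql_server.test01'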
There is a way:
1. terraform plan
2. terraform apply
3. terraform state rm "resource_name" - this removes the resource from the current state
4. terraform apply again
This worked perfectly on GCP for creating two successive VMs from the same Terraform script. The only thing is that we need to write code to capture the current resources, store them somewhere, and generate the commands. While destroying, we can add the resources back using terraform state mv "resource_name".
Note: this has risk, as the very first VM did not get deleted because it is considered to be out of the scope of Terraform, so its cost may persist. So make sure you keep (incremental) backups of your state.
Hope this helps.

Terraform with vSphere: the operation is not supported on the object (resource pool)

I have a Terraform file to create a resource pool on my home vSphere instance. The Terraform file looks as follows:
provider "vsphere" {
vsphere_server = "${var.vsphere_server}"
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
allow_unverified_ssl = true
}
data "vsphere_datacenter" "dc" {
name = "Datacenter1"
}
data "vsphere_compute_cluster" "compute_cluster" {
name = "Cluster1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
resource "vsphere_resource_pool" "resource_pool" {
name = "terraform-resource-pool-test"
parent_resource_pool_id = "${data.vsphere_compute_cluster.compute_cluster.resource_pool_id}"
}
The output from terraform plan is the following:
# vsphere_resource_pool.resource_pool will be created
+ resource "vsphere_resource_pool" "resource_pool" {
+ cpu_expandable = true
+ cpu_limit = -1
+ cpu_reservation = 0
+ cpu_share_level = "normal"
+ cpu_shares = (known after apply)
+ id = (known after apply)
+ memory_expandable = true
+ memory_limit = -1
+ memory_reservation = 0
+ memory_share_level = "normal"
+ memory_shares = (known after apply)
+ name = "terraform-resource-pool-test"
+ parent_resource_pool_id = "resgroup-8"
}
Plan: 1 to add, 0 to change, 0 to destroy.
But I always get back the following error:
vsphere_resource_pool.resource_pool: Creating...
Error: ServerFaultCode: The operation is not supported on the object.
on main.tf line 34, in resource "vsphere_resource_pool" "resource_pool":
  34: resource "vsphere_resource_pool" "resource_pool" {
Any idea on how to solve this? I'm using vSphere Version 6.0.0 Build 3617395
The code looks fine.
A manual fix will be helpful in this case.
Since it is your own system, it should be fine to clean the tfstate files; otherwise, back them up first.
Clean the environment:
# remove this folder and the state files from the current directory, where you run `terraform apply`
rm -rf .terraform
rm terraform.tfstate*    # do the same in any subfolders
# remove this folder from the home directory
rm -rf ~/.terraform.d/
Then deploy again:
terraform init
terraform plan
terraform apply
