Does terraform support aws backup feature for cross region copy (https://www.terraform.io/docs/providers/aws/r/backup_plan.html )?
From reading the documentation, it appears to be supported.
But I get the following error:
Error: Unsupported argument
on backup_plan.tf line 11, in resource "aws_backup_plan" "example":
11: copy_action = {
An argument named "copy_action" is not expected here.
My Terraform file, for your reference:
resource "aws_backup_plan" "example" {
name = "example-plan"
rule {
rule_name = "MainRule"
target_vault_name = "primary"
schedule = "cron(5 8 * * ? *)"
start_window = 480
completion_window = 10080
lifecycle {
delete_after = 30
}
copy_action {
destination_vault_arn = "arn:aws:backup:us-west-2:123456789:backup-vault:secondary"
}
}
}
But when I remove the block
copy_action {
  destination_vault_arn = "arn:aws:backup:us-west-2:123456789:backup-vault:secondary"
}
it works just fine.
Thanks
I assume you are running a version of the Terraform AWS Provider of 2.57.0 or older.
Version 2.58.0 (released 3 days ago) brought support for the copy_action:
resource/aws_backup_plan: Add rule configuration block copy_action configuration block (support cross region copy)
You can specify in your code to require at least this version as follows:
provider "aws" {
version = "~> 2.58.0"
}
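If you are on Terraform 0.13 or later, the same constraint can also be expressed in a required_providers block; a minimal sketch (the >= pin is an assumption matching the version noted above):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.58.0" # first release with copy_action support
    }
  }
}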
Related
I am working on creating an Azure landing zone, and part of that is enabling/disabling resource providers on the newly created subscriptions.
I have tried to use an alias with a variable, but I get an error that I can't use a variable in an alias. Is there any way I can use this feature across multiple subscriptions?
This is my code, main.tf:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# List of providers I want to register
locals {
  # List is compiled from here:
  # https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-services-resource-providers
  provider_list = [
    "Microsoft.Storage"
  ]
  provider_map = { for p in local.provider_list : p => p }
}

# Registering a default provider here and skipping registration,
# as I will do it later
provider "azurerm" {
  features {}
  skip_provider_registration = true
}

# I am creating a subscription here with the same alias as the name.
# The subscription is being created under an EA enrollment, but
# any type of subscription will do.
resource "azurerm_subscription" "feature_subscription" {
  billing_scope_id  = "/providers/Microsoft.Billing/billingAccounts/xxx/enrollmentAccounts/xx"
  alias             = var.temp_alias # "test-provider-registration"
  subscription_name = "test-provider-registration"
}

# This is what I have created to point at my azurerm_resource_provider_registration
# module; I am using a variable in the alias, which is failing.
provider "azurerm" {
  alias           = var.temp_alias
  subscription_id = azurerm_subscription.feature_subscription.id
  features {}
  skip_provider_registration = true
}

# Module through which I am registering the resource providers
module "azurerm_resource_provider_registration-provider" {
  source = "../modules/azurerm_resource_provider_registration"
  providers = {
    azurerm = azurerm.test-provider-registration
  }
  feature_list = local.provider_map
}

# The module code is mentioned here:
# resource "azurerm_resource_provider_registration" "provider" {
#   for_each = var.feature_list
#   name     = each.value
# }
I am getting this error when I run it:
There are some problems with the configuration, described below.
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
Error: Variables not allowed
│
On main.tf line 25: Variables may not be used here.
╵
╷
Error: Unsuitable value type
│
On main.tf line 25: Unsuitable value: value must be known
There is a workaround available, like using this:
resource "null_resource" "provider_registration" {
for_each = local.provider_map
provisioner "local-exec" {
command = "az account set --subscription ${data.azurerm_subscription.subscription.subscription_id} && az provider register --namespace ${each.value}"
}
}
but I want to use the state file for the resource registration if possible, as I have more subscriptions in a loop.
Error: Unsuitable value type | Unsuitable value: value must be known
Things to check:
This problem usually occurs with module sources or versions. When calling a module, using variables instead of direct values for source (and certain other arguments) causes this error: terraform init doesn't take variable inputs for modules or backend state configuration.
Note: also include the required version for modules.
To make it work, use direct values in the module block instead of values taken from other resources.
Pass the provider explicitly to make it work, as detailed in Terraform providers:
module "azurerm_resource_provider_registration-provider" {
  source  = "../modules/azurerm_resource_provider_registration"
  version = <required version> # use the latest one
  providers = {
    azurerm = azurerm.test-provider-registration
  }
  feature_list = local.provider_map
}
After checking the above conditions, I tried the same in my environment and it initialized successfully.
Refer to aliasing for deployment across multiple subscriptions, as detailed by Jeff Brown.
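As a minimal sketch based on the question's own code (the alias string and the subscription_id attribute reference are assumptions, not something I have tested here): hard-code the alias as a literal string, since it cannot come from a variable, and reference that literal name in the module's providers map.
provider "azurerm" {
  # the alias must be a literal, not var.temp_alias
  alias                      = "test_provider_registration"
  subscription_id            = azurerm_subscription.feature_subscription.subscription_id
  skip_provider_registration = true
  features {}
}

module "azurerm_resource_provider_registration-provider" {
  source = "../modules/azurerm_resource_provider_registration"
  providers = {
    azurerm = azurerm.test_provider_registration
  }
  feature_list = local.provider_map
}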
I have the following config:
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.25.0"
    }
    databricks = {
      source  = "databricks/databricks"
      version = "1.4.0"
    }
  }
}

provider "azurerm" {
  alias           = "uat-sub"
  features {}
  subscription_id = "sfsdf"
}

provider "databricks" {
  host  = "https://abd-1234.azuredatabricks.net"
  token = "sdflkjsdf"
  alias = "dev-dbx-provider"
}

resource "databricks_cluster" "dev_cluster" {
  cluster_name  = "xyz"
  spark_version = "10.4.x-scala2.12"
}
I am able to successfully import databricks_cluster.dev_cluster. Once imported, I update my config to output a value from the cluster in state. The updated config looks like this:
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.25.0"
    }
    databricks = {
      source  = "databricks/databricks"
      version = "1.4.0"
    }
  }
}

provider "azurerm" {
  alias           = "uat-sub"
  features {}
  subscription_id = "sfsdf"
}

provider "databricks" {
  host  = "https://abd-1234.azuredatabricks.net"
  token = "sdflkjsdf"
  alias = "dev-dbx-provider"
}

resource "databricks_cluster" "dev_cluster" {
  cluster_name  = "xyz"
  spark_version = "10.4.x-scala2.12"
}

output "atm" {
  value = databricks_cluster.dev_cluster.autotermination_minutes
}
When I run terraform apply on the updated config, Terraform proceeds to refresh my imported cluster, detects changes, and does an 'update in-place' where some of the values on my cluster are set to null (autoscale/pyspark_env etc.). All this happens when no changes are actually being made on the cluster. Why is this happening? Why is Terraform resetting some values when no changes have been made?
EDIT - 'terraform plan' output:
C:\Users\>terraform plan
databricks_cluster.dev_cluster: Refreshing state... [id=gyht]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # databricks_cluster.dev_cluster will be updated in-place
  ~ resource "databricks_cluster" "dev_cluster" {
      ~ autotermination_minutes = 10 -> 60
      - data_security_mode      = "NONE" -> null
        id                      = "gyht"
      ~ spark_env_vars          = {
          - "PYSPARK_PYTHON" = "/databricks/python3/bin/python3" -> null
        }
        # (13 unchanged attributes hidden)

      - autoscale {
          - max_workers = 8 -> null
          - min_workers = 2 -> null
        }

      - cluster_log_conf {
          - dbfs {
              - destination = "dbfs:/cluster-logs" -> null
            }
        }

        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
EDIT - Workaround with hard-coded values:
resource "databricks_cluster" "dev_cluster" {
cluster_name = "xyz"
spark_version = "10.4.x-scala2.12"
autotermination_minutes = 10
data_security_mode = "NONE"
autoscale {
max_workers = 8
min_workers = 2
}
cluster_log_conf {
dbfs {
destination = "dbfs:/cluster-logs"
}
}
spark_env_vars = {
PYSPARK_PYTHON = "/databricks/python3/bin/python3"
}
}
The workaround partially works, as I no longer see Terraform trying to reset these values on every apply. But if I were to change any of them on the cluster, let's say I change max workers to 5, Terraform will not update state to reflect 5 workers. It will override 5 with the hard-coded 8, which is an issue.
To answer the first part of your question: Terraform has imported the actual values of your cluster into the state file, but it cannot import those values into your config file (.tf) for you, so you need to specify them manually (as you have done).
By not setting the optional fields, you are effectively saying "set those fields to their default value", which in most cases is null (with the exception of the autotermination_minutes field, which has a default of 60). That is why Terraform detects a drift between your state and your config (the actual values from the import vs. the default values of the unspecified fields).
For reference : https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster
For the second part of your question, you say
let's say I change max workers to 5, terraform will not update state to reflect 5 workers.
If you mean you change the max workers from outside of Terraform, then Terraform is designed to override that field when you run terraform apply. When working with Terraform, if you want to make a change to your infrastructure, you always make the change in your Terraform config and run terraform apply to apply it.
So in your case, if you wanted to change max_workers to 5, you would set that value in the Terraform config and run terraform apply; you would not do it from within Databricks. If that behaviour is problematic, I would question whether you want to manage that resource with Terraform, as that is always how Terraform will work.
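For instance, a minimal sketch based on the workaround block from the question: change the value in the config and apply it, rather than editing the cluster in the Databricks UI.
resource "databricks_cluster" "dev_cluster" {
  cluster_name  = "xyz"
  spark_version = "10.4.x-scala2.12"

  autoscale {
    max_workers = 5 # changed from 8 here; terraform apply resizes the cluster to match
    min_workers = 2
  }

  # ...the rest of the block (cluster_log_conf, spark_env_vars, etc.) stays unchanged
}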
Hope that helps!
This is regarding the max_workers changes: if you have a var.tf file and have declared something like variable "max" { default = 8 } in it,
then you can override that value explicitly by providing the required value when planning, such as terraform plan -var="max=5", and check the result in the plan output.
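For illustration, a sketch of that pattern (the variable name max comes from the suggestion above and is otherwise arbitrary):
# var.tf
variable "max" {
  type    = number
  default = 8
}

# main.tf (only the relevant part of the cluster block shown)
resource "databricks_cluster" "dev_cluster" {
  cluster_name  = "xyz"
  spark_version = "10.4.x-scala2.12"

  autoscale {
    max_workers = var.max # overridable with terraform plan -var="max=5"
    min_workers = 2
  }
}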
:)
According to the documentation on Terraform.io for azurerm_cosmosdb_sql_container, I can include an indexing_policy block. However, when I run terraform plan I get errors:
Error: Unsupported block type
  on main.tf line 912, in resource "azurerm_cosmosdb_sql_container" "AccountActivity":
  912:   indexing_policy {

Blocks of type "indexing_policy" are not expected here.
main.tf
resource "azurerm_cosmosdb_sql_container" "AccountActivity" {
name = "AccountActivity"
resource_group_name = azurerm_resource_group.backendResourceGroup.name
account_name = azurerm_cosmosdb_account.AzureCosmosAccount.name
database_name = azurerm_cosmosdb_sql_database.AzureCosmosDbCache.name
default_ttl = 2592000
throughput = 2500
indexing_policy {
indexing_mode = "Consistent"
included_path {
path = "/*"
}
excluded_path {
path = "/\"_etag\"/?"
}
}
}
Here is my terraform version output:
terraform version
Terraform v0.13.4
+ provider registry.terraform.io/-/azurerm v2.30.0
+ provider registry.terraform.io/hashicorp/azurerm v2.20.0
+ provider registry.terraform.io/hashicorp/random v2.3.0
After searching GitHub, I finally found that support for the indexing_policy block was added in this commit 26 days ago. The documentation doesn't mention this, nor do the release notes for azurerm v2.31.1. After updating my main.tf file with the latest azurerm version and running terraform init, the terraform plan command worked without issue.
provider "azurerm" {
version = "~>2.31.1"
features {}
}
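Since the terraform version output above shows Terraform v0.13.4, the same pin could equivalently be declared in a required_providers block; a sketch of that form (version constraint taken from the answer above):
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.31.1"
    }
  }
}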
I'm trying to create a Google Cloud SQL instance via Terraform and I have to enable the point-in-time recovery option, but I get the following error:
Error: Unsupported argument
on cloud-sql.tf line 39, in resource "google_sql_database_instance" "si_geny_postgres_logfaces":
39: point_in_time_recovery_enabled = true
An argument named "point_in_time_recovery_enabled" is not expected here.
Here is my Terraform file:
resource "google_sql_database_instance" "si_geny_postgres_logfaces" {
project = google_project.current_project.project_id
region = var.region
name = "si-sql-instance"
database_version = "POSTGRES_12"
lifecycle {
prevent_destroy = true
ignore_changes = [
settings[0].disk_size, name
]
}
settings {
tier = "db-custom-2-7680"
availability_type = "REGIONAL"
ip_configuration {
ipv4_enabled = false
private_network = data.google_compute_network.si_shared_vpc.self_link
}
location_preference {
zone = var.gce_zone
}
#disk
disk_type = "PD_SSD"
disk_autoresize = true
disk_size = 10 #GB
backup_configuration {
binary_log_enabled = false
point_in_time_recovery_enabled = true
enabled = true
start_time = "00:00" // backup at midnight (GMT)
location = var.region // Custom Location for backups => BACKUP REGION
}
maintenance_window {
day = 1
hour = 3
update_track = "stable"
}
}
}
main.tf
terraform {
  required_version = ">0.12.18"
}

provider "google" {
  version = "=3.20.0"
  project = var.project_id
  region  = var.region
  zone    = var.gce_zone
}

provider "google-beta" {
  version = "=3.20.0"
  project = var.project_id
  region  = var.region
  zone    = var.gce_zone
}
Any idea please?
Typically when you get these:
An argument named "..." is not expected here.
errors in Terraform, the first thing to check is that your file is correct and that the property in the error is actually listed in the docs (which this one is).
The next thing is to check that you're using the latest version of the provider. As properties are introduced they get added to the documentation, but it's not always obvious in which version of the provider they were added. You can check which is the latest provider version from the release notes.
So you should upgrade your provider version to the latest (3.40.0 at the time of writing):
provider "google" {
version = "=3.40.0"
project = var.project_id
region = var.region
zone = var.gce_zone
}
I have a scale-down issue on my GKE cluster and found out that with the right configuration I can solve it.
According to the Terraform documentation, I can use the argument autoscaling_profile and set it to OPTIMIZE_UTILIZATION, like so:
resource "google_container_cluster" "k8s_cluster" {
[...]
cluster_autoscaling {
enabled = true
autoscaling_profile = "OPTIMIZE_UTILIZATION"
resource_limits {
resource_type = "cpu"
minimum = 1
maximum = 4
}
resource_limits {
resource_type = "memory"
minimum = 4
maximum = 16
}
}
}
But I got this error :
Error: Unsupported argument

  on modules/gke/main.tf line 70, in resource "google_container_cluster" "k8s_cluster":
  70:   autoscaling_profile = "OPTIMIZE_UTILIZATION"

An argument named "autoscaling_profile" is not expected here.
I don't get it?
TL;DR
Add the parameter below to the definition of your resource (at the top):
provider = google-beta
More explanation:
autoscaling_profile, as shown in the documentation, is a beta feature. This means it needs to use a different provider: google-beta.
You can read more about it by following official documentation:
Terraform.io: Using the google beta provider
Focusing on the most important parts from the above docs:
How to use it:
To use the google-beta provider, simply set the provider field on each resource where you want to use google-beta.
resource "google_compute_instance" "beta-instance" {
provider = google-beta
# ...
}
Disclaimer about usage of google and google-beta:
If the provider field is omitted, Terraform will implicitly use the google provider by default even if you have only defined a google-beta provider block.
Putting the whole explanation together, your GKE cluster definition should look like this:
resource "google_container_cluster" "k8s_cluster" {
[...]
provider = google-beta # <- HERE IT IS
cluster_autoscaling {
enabled = true
autoscaling_profile = "OPTIMIZE_UTILIZATION"
resource_limits {
resource_type = "cpu"
minimum = 1
maximum = 4
}
resource_limits {
resource_type = "memory"
minimum = 4
maximum = 16
}
}
}
You will also need to run:
$ terraform init
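If the google-beta provider is not yet declared in your configuration, it is usually worth adding a provider "google-beta" block alongside your existing google one so that project, region, and zone are set for beta resources as well; a minimal sketch (the variable names are assumptions mirroring a typical google provider block):
provider "google-beta" {
  project = var.project_id
  region  = var.region
  zone    = var.gce_zone
}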