Terraform not accepting allow_major_version_upgrade - amazon-rds

I have the following RDS definition:
resource "aws_db_instance" "my-rds" {
allocated_storage = 20
engine = "mysql"
engine_version = "5.5.57"
instance_class = "db.t2.micro"
identifier = "my-db"
name = "somename"
username = "${var.RDS_USERNAME}" # username
password = "${var.RDS_PASSWORD}" # password
db_subnet_group_name = "${aws_db_subnet_group.some-subnet-group.name}"
parameter_group_name = "${aws_db_parameter_group.some-rds-parameter-group.name}"
multi_az = "false"
vpc_security_group_ids = ["${aws_security_group.some-sg.id}"]
storage_type = "gp2"
skip_final_snapshot = true
backup_retention_period = 30 # how long we re going to keep your backups
availability_zone = "${aws_subnet.some-private-1.availability_zone}"
tags {
Name = "some-tag-name"
}
}
So I am just adding:
allow_major_version_upgrade = true
... and getting this error:
Error: Error applying plan:
1 error(s) occurred:
* aws_db_instance.my-rds: 1 error(s) occurred:
* aws_db_instance.my-rds: Error modifying DB Instance my-db: InvalidParameterCombination: No modifications were requested
status code: 400, request id: 2aed626f-6063-4b69-ac37-654bd783fd37
Why is this happening?

This may be related to this GitHub issue, or this one, or other similar issues. There seems to be an issue with pending modifications versus applying them immediately. For example, if I set up a DB like the one in your question and attempt to set allow_major_version_upgrade = true, the first run fails with the same error, but the change does go through, and running apply again shows no pending changes. However, if I also set apply_immediately = true, it works on the first run without an error.
In addition, while attempting to reproduce this I noticed that invalid parameters also produce the same error, for example specifying an engine version that doesn't exist when changing engine_version.
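A minimal sketch of the workaround described above (only the two relevant arguments are shown; everything else stays as in the question):

resource "aws_db_instance" "my-rds" {
  # ... all existing arguments from the question ...

  allow_major_version_upgrade = true
  apply_immediately           = true # apply the modification on this run instead of leaving it pending for the maintenance window
}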

Related

terraform create global rds instance won't work

I’m trying to get this tutorial to work: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_global_cluster#attributes-reference
The configuration I’m currently using looks like this, and it fails when I run terraform plan:
resource "aws_rds_global_cluster" "example" {
global_cluster_identifier = "global-test"
database_name = "example"
engine = "aurora-postgresql"
engine_version = "12.6"
}
resource "aws_rds_cluster" "primary" {
# provider = aws.primary
count = "${local.resourceCount == "2" ? "1" : "0"}"
identifier = "kepler-example-global-cluster-${lookup(var.kepler-env-name, var.env)}-${count.index}"
database_name = "example"
engine = "aurora-postgresql"
engine_version = "12.6"
vpc_security_group_ids = ["${var.rds_security_group_survey_id}"]
# cluster_identifier = aws_rds_global_cluster.example_global_cluster.id
master_username = "root"
master_password = "somepass123"
# master_password = var.credential
global_cluster_identifier = aws_rds_global_cluster.example.id
db_subnet_group_name = var.db_subnet_group_id
}
At the moment I’m getting this error:
Error: Error loading modules: module openworld: Error parsing .terraform/modules/d375d2d1997599063f4fb9e7587fec26/main.tf: At 63:31: Unknown token: 63:31 IDENT aws_rds_global_cluster.example.id
I’m confused as to why this error comes up.
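For what it's worth, "Unknown token: ... IDENT" is the error Terraform 0.11's HCL1 parser reports when it finds a bare reference outside an interpolation string, and global_cluster_identifier = aws_rds_global_cluster.example.id is exactly such a reference (the rest of the config uses 0.11-style "${...}" interpolation). Assuming the module really is being loaded by Terraform 0.11, a sketch of the 0.11-compatible form of that line would be:

  global_cluster_identifier = "${aws_rds_global_cluster.example.id}"

Alternatively, upgrading the configuration to Terraform 0.12 or later allows the bare reference as written.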

Terraform keeps destroying existing resource

I'm trying to debug why my Terraform script is not working. For an unknown reason, Terraform keeps destroying my MySQL database and recreating it afterwards.
Below is the output of the execution plan:
# azurerm_mysql_server.test01 will be destroyed
- resource "azurerm_mysql_server" "test01" {
    - administrator_login               = "me" -> null
    - auto_grow_enabled                 = true -> null
    - backup_retention_days             = 7 -> null
    - create_mode                       = "Default" -> null
    - fqdn                              = "db-test01.mysql.database.azure.com" -> null
    - geo_redundant_backup_enabled      = false -> null
    - id                                = "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01" -> null
    - infrastructure_encryption_enabled = false -> null
    - location                          = "westeurope" -> null
    - name                              = "db-test01" -> null
    - public_network_access_enabled     = true -> null
    - resource_group_name               = "production-rg" -> null
    - sku_name                          = "B_Gen5_1" -> null
    - ssl_enforcement                   = "Disabled" -> null
    - ssl_enforcement_enabled           = false -> null
    - ssl_minimal_tls_version_enforced  = "TLSEnforcementDisabled" -> null
    - storage_mb                        = 51200 -> null
    - tags                              = {} -> null
    - version                           = "8.0" -> null

    - storage_profile {
        - auto_grow             = "Enabled" -> null
        - backup_retention_days = 7 -> null
        - geo_redundant_backup  = "Disabled" -> null
        - storage_mb            = 51200 -> null
      }

    - timeouts {}
  }

# module.databases.module.test.azurerm_mysql_server.test01 will be created
+ resource "azurerm_mysql_server" "test01" {
    + administrator_login               = "me"
    + administrator_login_password      = (sensitive value)
    + auto_grow_enabled                 = true
    + backup_retention_days             = 7
    + create_mode                       = "Default"
    + fqdn                              = (known after apply)
    + geo_redundant_backup_enabled      = false
    + id                                = (known after apply)
    + infrastructure_encryption_enabled = false
    + location                          = "westeurope"
    + name                              = "db-test01"
    + public_network_access_enabled     = true
    + resource_group_name               = "production-rg"
    + sku_name                          = "B_Gen5_1"
    + ssl_enforcement                   = (known after apply)
    + ssl_enforcement_enabled           = false
    + ssl_minimal_tls_version_enforced  = "TLSEnforcementDisabled"
    + storage_mb                        = 51200
    + version                           = "8.0"

    + storage_profile {
        + auto_grow             = (known after apply)
        + backup_retention_days = (known after apply)
        + geo_redundant_backup  = (known after apply)
        + storage_mb            = (known after apply)
      }
  }
As far as I know everything is exactly the same. To prevent this I also did a manual terraform import to sync the state with the remote state.
The actual resource as defined in my main.tf:
resource "azurerm_mysql_server" "test01" {
name = "db-test01"
location = "West Europe"
resource_group_name = var.rg
administrator_login = "me"
administrator_login_password = var.root_password
sku_name = "B_Gen5_1"
storage_mb = 51200
version = "8.0"
auto_grow_enabled = true
backup_retention_days = 7
geo_redundant_backup_enabled = false
infrastructure_encryption_enabled = false
public_network_access_enabled = true
ssl_enforcement_enabled = false
}
The other odd thing is that the command below reports that everything is actually in sync:
➜ terraform git:(develop) ✗ terraform plan --refresh-only
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/firstklas-production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
No changes. Your infrastructure still matches the configuration.
After an actual import the same thing still happens, even though the import reports that everything is in state:
➜ terraform git:(develop) ✗ terraform import azurerm_mysql_server.test01 /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
azurerm_mysql_server.test01: Importing from ID "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01"...
azurerm_mysql_server.test01: Import prepared!
Prepared azurerm_mysql_server for import
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
What can I do to prevent this destroy? Or at least figure out why the destroy is triggered in the first place? This is happening on multiple Azure instances at this point.
NOTE: the subscription ID is spoofed so don't worry
Best,
Pim
Your plan output shows that Terraform is seeing two different resource addresses:
# azurerm_mysql_server.test01 will be destroyed
# module.databases.module.test.azurerm_mysql_server.test01 will be created
Notice that the one to be created is in a nested module, not in the root module.
If your intent is to import this object to the address that is shown as needing to be created above, you'll need to specify this full address in the terraform import command:
terraform import 'module.databases.module.test.azurerm_mysql_server.test01' /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
The terraform import command tells Terraform to bind an existing remote object to a particular Terraform address, and so when you use it you need to be careful to specify the correct Terraform address to bind to.
In your case, you told Terraform to bind the object to a hypothetical resource "azurerm_mysql_server" "test01" block in the root module, but your configuration has no such block there, so when you ran terraform plan, Terraform assumed you wanted to delete that object: removing a resource block is how we typically tell Terraform that we intend to destroy something.
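As a sketch of an alternative, assuming the object is currently recorded in state at the root address shown in the destroy line: instead of re-importing, you can move the existing state entry to the module address so the two sides line up:

terraform state mv 'azurerm_mysql_server.test01' 'module.databases.module.test.azurerm_mysql_server.test01'

Either way, terraform state list afterwards should show the resource only at the module address, and the plan should no longer propose a destroy and create pair.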
There is another way:
Plan
Apply
terraform state rm "resource_name"   (this removes the resource from the current state without destroying it)
Apply again
This worked perfectly on GCP for creating two successive VMs using the same Terraform script. The only thing is that you need to record the removed resources somewhere so you can bring them back into state later (for example with terraform import) when it is time to destroy them.
Note: this has risk. The very first VM did not get deleted, as it is considered out of Terraform's scope, so its cost may persist. Make sure you keep (incremental) backups of your state.
Hope this helps.
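For reference, a minimal sketch of the state commands in that workflow (the resource address is a placeholder, not taken from the answer above):

terraform state list                               # confirm the address of the resource you want Terraform to forget
terraform state rm 'google_compute_instance.vm1'   # remove it from state; the real VM keeps running
terraform apply                                    # the next apply creates a fresh resource at that address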

Terraform unsupported attribute error using Scaleway

After I run terraform apply and type 'yes' I get the following error 3 times (since I have 3 null resources):
Error: Unsupported attribute: This value does not have any attributes.
I checked each of the entries in my connection block, and the problem seems to be coming from the host attribute. I believe the error occurs because ips.address is only generated after the server has launched, while Terraform wants a value for host before the BareMetal server has been deployed. Is there something wrong I'm doing here? Either I'm using the wrong value (I've tried ips.id as well), or I need to create some sort of output for when ips.address has been generated and then set host from it. I haven't been able to find any resources on BareMetal provisioning in Scaleway. Here is my code, with instance_number = 3:
provider "scaleway" {
access_key = var.ACCESS_KEY
secret_key = var.SECRET_KEY
organization_id = var.ORGANIZATION_ID
zone = "fr-par-2"
region = "fr-par"
}
resource "scaleway_account_ssh_key" "main" {
name = "main"
public_key = file("~/.ssh/id_rsa.pub")
}
resource "scaleway_baremetal_server" "base" {
count = var.instance_number
name = "${var.env_name}-BareMetal-${count.index}"
offer = var.baremetal_type
os = var.baremetal_image
ssh_key_ids = [scaleway_account_ssh_key.main.id]
tags = [ "BareMetal-${count.index}" ]
}
resource "null_resource" "ssh" {
count = var.instance_number
connection {
type = "ssh"
private_key = file("~/.ssh/id_rsa")
user = "root"
password = ""
host = scaleway_baremetal_server.base[count.index].ips.address
port = 22
}
provisioner "remote-exec" {
script = "provision/install_java_python.sh"
}
}
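A hedged guess based on how the error reads: ips on scaleway_baremetal_server is a list of IP objects rather than a single object, so host likely needs to index into it, for example:

  host = scaleway_baremetal_server.base[count.index].ips[0].address

This assumes the provider version in use exposes ips as a list whose elements have an address field; terraform console (or terraform state show on an already-created server) can confirm the exact attribute layout.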

How to use resource variable if resource was not created in Terraform

I want to enable or disable RDS encryption with a customer-managed key by adding a create_kms_key variable, but I always get the error Resource 'aws_kms_key.ami-kms-key' not found for variable 'aws_kms_key.ami-kms-key.arn' when the "aws_kms_key" resource is not created.
create_kms_key = false

resource "aws_kms_key" "ami-kms-key" {
  count               = "${var.create_kms_key ? 1 : 0}"
  description         = "ami-kms-key"
  enable_key_rotation = true
}

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type      = "gp2"
  engine            = "mysql"
  engine_version    = "5.7.19"
  instance_class    = "db.t2.micro"
  name              = "encrypteddb"
  username          = "admin"
  password          = "admin"
  storage_encrypted = true
  kms_key_id        = "${aws_kms_key.ami-kms-key.arn}"
}
I have tried kms_key_id = "${var.create_kms_key ? aws_kms_key.ami-kms-key.arn : "" }" and it didn't help.
I don't want to create a KMS key every time I run Terraform. I expect to use the default KMS key (or an unencrypted RDS instance), or encryption with a customer-managed key, depending on the create_kms_key variable.
How do I skip kms_key_id in the resource?
Thanks!
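A sketch of the usual Terraform 0.11-era workaround, assuming nothing beyond the code in the question: reference the resource through a splat expression, which resolves to an empty list when count is 0, instead of referencing the attribute directly (which is what triggers the "Resource not found" error):

kms_key_id = "${var.create_kms_key ? join("", aws_kms_key.ami-kms-key.*.arn) : ""}"

Whether an empty string is accepted as "no customer-managed key" depends on the AWS provider version, so treat this as a starting point rather than a drop-in fix.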

With Terraform, adding an encrypted non-root volume in a launch template fails on plan

This works:
resource "aws_launch_template" "instances" {
...
block_device_mappings {
device_name = "/dev/xvdb"
ebs {
volume_type = "gp2"
volume_size = 250
delete_on_termination = true
}
}
But, when I try to add in this:
block_device_mappings {
  device_name = "/dev/xvdb"

  ebs {
    volume_type           = "gp2"
    volume_size           = 250
    delete_on_termination = true
    encrypted             = true
    kms_key_id            = "${data.aws_kms_key.instances.id}"
  }
}
So I can't add the encryption pieces. The key exists, is enabled, and has the permissions needed to access it. When I remove the encryption lines, plan runs to completion, so evidently it would apply.
terraform plan shows this:
Error: Error running plan: 1 error(s) occurred:
* module.asg_instances.aws_autoscaling_group.instances_asg: 1 error(s) occurred:
* module.asg_instances.aws_autoscaling_group.instances_asg: Resource 'aws_launch_template.instances_lt' not found for variable 'aws_launch_template.instances_lt.id'
The code for the ASG is:
resource "aws_autoscaling_group" "instances_asg" {
max_size = 5
min_size = 2
min_elb_capacity = 2
health_check_grace_period = 300
health_check_type = "ELB"
desired_capacity = 3
force_delete = false
vpc_zone_identifier = ["${data.aws_subnet_ids.instances_subnets.*.id}"]
load_balancers = ["${aws_elb.instances_elb.name}"]
launch_template {
id = "${aws_launch_template.instances_lt.id}"
version = "$$Latest"
}
lifecycle {
create_before_destroy = true
}
}
Evidently, the launch template doesn't even get created when I have the encryption lines, which causes the reference to it in the ASG to fail. It doesn't error out on the launch template itself not getting created, which it should.
The intention is to create an ASG based on this launch template, which creates instances with an encrypted non-root volume.
Any ideas what I've done wrong?
How can two people write similar code and make the same mistake? LOL
I came across this post and I had very similar code. I managed to debug and fix it. The problem is that this line is wrong:
kms_key_id = "${data.aws_kms_key.instances.id}"
It should be:
kms_key_id = "${data.aws_kms_key.instances.arn}"
It may come in handy for someone else, hence posting it.
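For context, a sketch of the mapping from the question with that one-line fix applied (everything else unchanged):

block_device_mappings {
  device_name = "/dev/xvdb"

  ebs {
    volume_type           = "gp2"
    volume_size           = 250
    delete_on_termination = true
    encrypted             = true
    kms_key_id            = "${data.aws_kms_key.instances.arn}" # the launch template wants the key ARN, not the key ID
  }
}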
