How to ignore changes to a specific annotation with Terraform CDK

What's the correct way to use the ignoreChanges config to ignore changes to a specific annotation of a Kubernetes deployment?
One of my Kubernetes deployments has the following annotation, automatically injected by a CRD based on some external state change:
metadata:
  annotations:
    secrets.doppler.com/secretsupdate.api: W/"8673f9c59166f300cacd436f95f83d3379f84643d8259297c18facf0076b50e7"
I'd like Terraform not to trigger a redeployment when it sees changes to this annotation.
I suspect something like the following would be correct, but I'm not sure of the right syntax when using Terraform CDK:
new k8s.Deployment(this, name, {
  lifecycle: {
    ignoreChanges: ["metadata.annotations.\"secrets.doppler.com/secretsupdate.api\""],
  },
  // ...
})
I tried the above syntax, but it didn't work:
│ Error: Invalid expression
│
│ on cdk.tf.json line 2492, in resource.kubernetes_deployment.api_BA7F1523.lifecycle.ignore_changes:
│ 2492: "metadata.annotations.\"secrets.doppler.com/secretsupdate.api\""
│
│ A single static variable reference is required: only attribute access and
│ indexing with constant keys. No calculations, function calls, template
│ expressions, etc are allowed here.
What's the correct syntax for ignoring an annotation like this?

As is typical, I figured it out immediately after posting.
metadata[0].annotations[\"secrets.doppler.com/secretsupdate.api\"]
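In the CDKTF snippet from the question, that expression goes into the same lifecycle block (single quotes avoid the escaping):
new k8s.Deployment(this, name, {
  lifecycle: {
    ignoreChanges: ['metadata[0].annotations["secrets.doppler.com/secretsupdate.api"]'],
  },
  // ...
})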

Related

What remedies exist when `terraform validate` returns false positives?

I'm importing a lot of existing infrastructure to Terraform. On multiple occasions (and with various resource types), I've seen issues like the following...
After a successful import of a resource (and manually copying the associated state into my configuration), running terraform validate returns an error due to Terraform argument validation rules that are more restrictive than the provider's actual rules.
Example imported configuration:
resource "aws_athena_database" "example" {
name = "mydatabase-dev"
properties = {}
}
Example validation error:
$ terraform validate
╷
│ Error: invalid value for name (must be lowercase letters, numbers, or underscore ('_'))
│
│ with aws_athena_database.example,
│ on main.tf line 121, in resource "aws_athena_database" "example":
│ 121: name = "mydatabase-dev"
│
Error caused by this provider code:
"name": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validation.StringMatch(regexp.MustCompile("^[_a-z0-9]+$"), "must be lowercase letters, numbers, or underscore ('_')"),
},
Since terraform validate is failing, terraform plan and terraform apply will also fail. Short of renaming the preexisting resources (which could be disruptive), is there an easy way around this?
To fix this issue, you can use the ignore_changes lifecycle argument in your Terraform configuration to tell Terraform to ignore changes to the name attribute of the aws_athena_database resource. This allows you to keep the existing name of the resource even though it does not match the validation rules in the provider code. After adding ignore_changes, run terraform validate again to verify that the error has been resolved; you can then run terraform plan and terraform apply as usual to apply the changes to your infrastructure.
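A sketch of that suggestion applied to the example resource:
resource "aws_athena_database" "example" {
  name       = "mydatabase-dev"
  properties = {}

  lifecycle {
    # Ask Terraform to disregard changes to the name argument.
    ignore_changes = [name]
  }
}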

Terraform variable from GitLab CI/CD variables

I understand that CI/CD variables can be used in HCL: declaring them with a TF_VAR_ prefix in the environment lets Terraform pick them up as input variables, which I can then use in the .tf file where I need them.
I did the following:
- set my variable via the UI in the GitLab project as TF_VAR_ibm_api_key, then masked it
- wrote a variable block for it in main.tf
- called it where I need it in the same main.tf
- tried including the variable in variables.tf instead, with the same result
- read the documentation from GitLab and from Terraform, but I'm not getting this right.
This is my main.tf file:
variable ibm_api_key {
}

terraform {
  required_version = ">= 0.13"
  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm"
    }
  }
}

provider "ibm" {
  ibmcloud_api_key = var.ibm_api_key
}
Expected behavior: the variable is passed from the CI/CD and added to the HCL code.
Current behavior: during `plan`, the job fails with exit code 1:
$ terraform plan
var.ibm_api_key
Enter a value: ╷
│ Error: No value for required variable
│
│ on main.tf line 1:
│ 1: variable ibm_api_key {
│
│ The root module input variable "ibm_api_key" is not set, and has no default
│ value. Use a -var or -var-file command line argument to provide a value for
│ this variable.
╵
Although it logically shouldn't be the issue, I tried formatting the variable reference as string interpolation, like:
provider "ibm" {
ibmcloud_api_key = "${var.ibm_api_key}"
}
naturally to no avail.
Again, although it logically shouldn't be the issue, I tried defining a type for the variable:
variable ibm_api_key {
  type = string
}
naturally to no avail.
In order to check whether variables are passed from the CI/CD settings to the GitLab runner's environment, I added a variable that is neither protected nor masked, assigned it a string value, and inserted a double check:
echo ${output_check}
echo ${TF_VAR_ibm_api_key}
These do not result in an error, but the values are not printed either; only the echo commands appear in the output.
$ echo ${output_check}
$ echo ${TF_VAR_ibm_api_key}
Cleaning up project directory and file based variables 00:01
Job succeeded
Providers typically have intrinsic environment variables configured in their schema and/or associated bindings for authentication. This situation is, according to the provider authentication documentation, no different. You can authenticate the provider with an IBM API key from the GitLab CI project environment variable settings with:
IAAS_CLASSIC_API_KEY="iaas_classic_api_key"
The error was in the CI/CD settings.
The variables were set to be passed only to protected branches, and I was pushing my code to an unprotected one, which prevented the variables from being passed. When the code was merged to a protected branch, the variables showed up correctly. The variables are also correctly imported into Terraform, with the TF_VAR_ prefix stripped as expected.
TL;DR: If you're having this issue in GitLab CI/CD, check whether your CI/CD variables are restricted to protected branches, and whether the branch you're pushing to matches that setting.
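As a quick way to tell this protected-branch behavior apart from other problems: a variable defined directly in .gitlab-ci.yml (hypothetical example below) is available on every branch, unlike UI-defined variables restricted to protected branches:
# .gitlab-ci.yml (illustrative)
variables:
  # Defined in the pipeline file rather than in the project's CI/CD
  # settings, so it is passed to jobs on unprotected branches too.
  TF_VAR_output_check: "pipeline-file-variable"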

terraform source_resource_id output

What is the syntax to assign the managed disk id from a newly created virtual machine to an output? I would like to use it as the "source_resource_id" of an "azurerm_managed_disk" resource.
I have tried the following within outputs.tf:
output "manageddisk" {
value = azurerm_virtual_machine.vm.storage_os_disk.[0].managed_disk_id
}
However, this results in the following error:
╷
│ Error: Invalid attribute name
│
│ On outputs.tf line 17: An attribute name is required after a dot.
Arguments and attributes are two separate things, and not every argument can be read back as an attribute. Terraform's docs distinguish them by having an "Argument Reference" and an "Attributes Reference" section for each resource; only the attributes of a resource can be accessed.
One pathway is to create an azurerm_managed_disk resource separately, which does expose the identifier of the managed disk. You can then use the managed disk resource's id attribute as an input to the virtual machine you're provisioning:
output "manageddisk" {
value = azurerm_managed_disk.example.id
}
From here you have the output of the managed disk, so you can use it for whatever you'd like.
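For completeness, the referenced azurerm_managed_disk resource could look something like this (all names and values here are illustrative):
resource "azurerm_managed_disk" "example" {
  name                 = "example-disk"
  location             = "eastus"
  resource_group_name  = "example-resources"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 64
}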

Managing slight differences in outcomes with terraform modules

We inherited a Terraform + modules layout like this, where the databases for an environment (AWS RDS) are provisioned slightly differently depending on whether Terraform is invoked on the main branch or on any feature/* branch in our CI/CD pipelines.
$ tree -P main.tf
.
├── feature-database
│   ├── dev
│   │   └── main.tf
│   └── modules
│       └── database
│           └── main.tf
└── main-database
    ├── dev
    │   └── main.tf
    └── modules
        └── database
            └── main.tf

8 directories, 4 files
The feature-database module provisions an RDS instance from a snapshot of the RDS instance created by main-database - apart from this difference, everything else in the feature-database module is an exact copy-paste of main-database.
It seems like a code smell to have two very similar modules (i.e. */modules/database/main.tf) that are 95% identical to each other. We have concerns about maintenance, testing, and deployment with this approach and want to restructure to make it DRY.
So the questions naturally are:
What would be a good (ideally Terraform-native) way to manage these differences in provisioning depending on the environment? Is conditional execution a possibility, or do we just accept this as overhead and maintain separate, mostly identical modules?
Are there some out-of-the-box solutions with tools/approaches to help with something like this?
Non-idempotent operations such as creating and using snapshots/images are unfortunately not an ideal situation for Terraform's execution model, since they lend themselves more to an imperative execution model ("create a new instance using this particular snapshot" (where this particular is likely to change for each deployment) vs. "there should be an instance").
However, it is possible in principle to write such a thing. Without seeing the details of those modules it's hard to give specific advice, but at a high-level I'd expect to see the unified module have an optional input variable representing a snapshot ID, and then have the module vary its behavior based on whether that variable is set:
variable "source_snapshot_id" {
type = string
# This means that the variable is optional but
# it doesn't have a default value.
default = null
}
resource "aws_db_instance" "example" {
# ...
# If this variable isn't set then the value here
# will be null, which is the same as not setting
# snapshot_identifier at all.
snapshot_identifier = var.source_snapshot_id
}
The root module would then need to call this module twice and wire the result of the first instance into the second instance. Perhaps that would look something like this:
module "main_database" {
source = "../modules/database"
# ...
}
resource "aws_db_snapshot" "example" {
db_instance_identifier = module.main_database.instance_id
db_snapshot_identifier = "${module.main_database.instance_id}-feature-snapshot"
}
module "feature_database" {
source = "../modules/database"
source_snapshot_id = aws_db_snapshot.example.db_snapshot_identifier
# ...
}
On the first apply of this configuration, Terraform would first create the "main database", then immediately create a snapshot of it, and then create the "feature database" using that snapshot. In order for that to be useful the module would presumably need to encapsulate some actions to put some schema and possibly some data into the database, or else the snapshot would just be of an empty database. If those actions involve some other resources alongside the main aws_db_instance then you can encapsulate the correct ordering by declaring additional dependencies on the instance_id output value I presumed in my example above:
output "instance_id" {
# This reference serves as an implicit dependency
# on the DB instance itself.
value = aws_db_instance.example.id
# ...but if you have other resources that arrange
# for the database to have interesting data inside
# it then you'll likely want to declare those
# dependencies too, so that the root module won't
# start trying to create a snapshot until the
# database contents are ready
depends_on = [
aws_db_instance_role_association.example,
null_resource.example,
# ...
]
}
I've focused on the general Terraform patterns here rather than on the specific details of RDS, because I'm not super familiar with these particular resource types, but hopefully even if I got any details wrong above you can still see the general idea here and adapt it to your specific situation.

Referencing Terraform resource created in a different folder

I have the following folder structure:
infrastructure
├───security-groups
│       main.tf
│       config.tf
│       security_groups.tf
│
└───instances
        main.tf
        config.tf
        instances.tf
I would like to reference the security group id created in the security-groups folder.
I have tried to output the required ids in the security_groups.tf file with
output "sg_id" {
value = "${aws_security_group.server_sg.id}"
}
And then in the instances file add it as a module:
module "res" {
source = "../security-groups"
}
The problem with this approach is that when I do terraform apply in the instances folder, it tries to create the security groups as well (which I have already created by running terraform apply in the security-groups folder), and it fails because the SGs already exist.
What would be the easiest way to reference the resources created in a different folder, without changing the structure of the code?
Thank you.
To refer to an existing resource you need to use a data block. You won't refer directly to the resource block in the other folder, but instead specify a name or ID or whatever other unique identifier it has. So for a security group, you would add something like
data "aws_security_group" "sg" {
name = "the-security-group-name"
}
to your instances folder, and refer to that entity to associate your instances with that security group.
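For example, assuming a plain aws_instance in the instances folder (the AMI and instance type here are illustrative):
resource "aws_instance" "server" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  # Attach the security group looked up by the data block above.
  vpc_security_group_ids = [data.aws_security_group.sg.id]
}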
You should also consider whether you actually want to apply Terraform to the whole tree instead of each folder separately. Then you can refer directly between all of your managed resources, as you're trying to do, and you don't have to run terraform apply as many times.
While lxop's answer is the better practice, if you really do need to refer to outputs in another local folder, you can do it like this:
data "terraform_remote_state" "sg" {
backend = "local"
config = {
path = "../security-groups/terraform.tfstate"
}
}
and then refer to it using e.g.
locals {
  sgId = data.terraform_remote_state.sg.outputs.sg_id
}
