Terraform variable from gitlab CI/CD variables - terraform

I understand that CI/CD variables can be used in HCL: declaring them in the environment with a TF_VAR_ prefix should let Terraform pick them up as input variables, which I can then use in the .tf file where I need them.
I did the following:
set my variable via the UI in the GitLab project as TF_VAR_ibm_api_key, then masked it;
wrote a variable block for it in main.tf;
referenced it where I need it in the same file, main.tf;
tried declaring the variable in variables.tf instead, with the same result;
read the documentation from GitLab and from Terraform, but I'm not getting this right.
This is my main.tf file:
variable ibm_api_key {
}

terraform {
  required_version = ">= 0.13"
  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm"
    }
  }
}

provider "ibm" {
  ibmcloud_api_key = var.ibm_api_key
}
Expected behavior: the variable is passed from CI/CD and made available to the HCL code.
Current behavior: during `plan`, the job fails with exit code 1:
$ terraform plan
var.ibm_api_key
Enter a value: ╷
│ Error: No value for required variable
│
│ on main.tf line 1:
│ 1: variable ibm_api_key {
│
│ The root module input variable "ibm_api_key" is not set, and has no default
│ value. Use a -var or -var-file command line argument to provide a value for
│ this variable.
╵
Although logically it can't seem to be the issue, I tried formatting the variable reference as string interpolation, like:
provider "ibm" {
  ibmcloud_api_key = "${var.ibm_api_key}"
}
naturally to no avail.
Although logically it can't seem to be the issue, I tried defining a type for the variable:
variable ibm_api_key {
  type = string
}
naturally to no avail.
In order to check whether variables are passed from the CI/CD settings to the GitLab runner's environment, I added a variable that is neither protected nor masked, assigned it a string, and inserted a double check:
echo ${output_check}
echo ${TF_VAR_ibm_api_key}
which does not result in an error, but the values are not printed either. Only the echo commands themselves appear in the output.
$ echo ${output_check}
$ echo ${TF_VAR_ibm_api_key}
Cleaning up project directory and file based variables 00:01
Job succeeded

Providers typically have intrinsic environment variables configured in their schema and/or associated bindings for authentication. According to the provider's authentication documentation, this situation is no different. You can authenticate the provider with an IBM API key from the GitLab CI project environment variable settings with:
IAAS_CLASSIC_API_KEY="iaas_classic_api_key"
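With the credential exported in the runner's environment, the provider block itself can stay empty; a minimal sketch (assuming the environment variable names from the IBM provider's authentication docs):

```hcl
# Sketch: with IAAS_CLASSIC_API_KEY (or IC_API_KEY for a platform API key)
# present in the job's environment, no credential arguments are needed here.
provider "ibm" {
}
```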

The error was in the CI/CD settings.
The variables were set to be passed exclusively to protected branches, and I was pushing my code to an unprotected one, which prevented the variables from being passed. When I merged the code to a protected branch, the variables showed up correctly. Variables are also correctly imported into Terraform, with the expected removal of the TF_VAR_ prefix.
TL;DR If you're having this issue in GitLab CI/CD, check your CI/CD variables' protected-branch setting, and whether the branch you're pushing to matches it.
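For reference, a minimal .gitlab-ci.yml sketch (job name is illustrative) showing where this surfaces; a protected variable is only injected into jobs running on protected branches or tags:

```yaml
plan:
  image:
    name: hashicorp/terraform:latest
    entrypoint: [""]   # override the image's entrypoint so script lines run in a shell
  script:
    # Prints "present" only if the CI/CD variable reached the job environment.
    - echo "${TF_VAR_ibm_api_key:+present}"
    - terraform init
    - terraform plan
```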

Related

How to ignore changes of specific annotation with Terraform CDK

What's the correct way to use the ignoreChanges config to ignore changes of a specific annotation of a kubernetes deployment?
One of my kubernetes deployments has the following annotation automatically injected by a CRD based on some external state change:
metadata:
  annotations:
    secrets.doppler.com/secretsupdate.api: W/"8673f9c59166f300cacd436f95f83d3379f84643d8259297c18facf0076b50e7"
I'd like terraform to not trigger a redeployment when it sees changes to this annotation.
I suspect something like the following would be correct but I'm not sure what the right syntax is when using terraform cdk:
new k8s.Deployment(this, name, {
  lifecycle: {
    ignoreChanges: ["metadata.annotations.\"secrets.doppler.com/secretsupdate.api\""],
  },
  // ...
})
I tried using the above syntax but it didn't work.
│ Error: Invalid expression
│
│ on cdk.tf.json line 2492, in resource.kubernetes_deployment.api_BA7F1523.lifecycle.ignore_changes:
│ 2492: "metadata.annotations.\"secrets.doppler.com/secretsupdate.api\""
│
│ A single static variable reference is required: only attribute access and
│ indexing with constant keys. No calculations, function calls, template
│ expressions, etc are allowed here.
What's the correct syntax for ignoring an annotation like this?
As is typical, I figured it out immediately after posting.
metadata[0].annotations[\"secrets.doppler.com/secretsupdate.api\"]

What remedies exist when `terraform validate` returns false positives?

I'm importing a lot of existing infrastructure to Terraform. On multiple occasions (and with various resource types), I've seen issues like the following...
After a successful import of a resource (and manually copying the associated state into my configuration), running terraform validate returns an error due to Terraform argument validation rules that are more restrictive than the provider's actual rules.
Example imported configuration:
resource "aws_athena_database" "example" {
  name       = "mydatabase-dev"
  properties = {}
}
Example validation error:
$ terraform validate
╷
│ Error: invalid value for name (must be lowercase letters, numbers, or underscore ('_'))
│
│ with aws_athena_database.example,
│ on main.tf line 121, in resource "aws_athena_database" "example":
│ 121: name = "mydatabase-dev"
│
Error caused by this provider code:
"name": {
  Type:         schema.TypeString,
  Required:     true,
  ForceNew:     true,
  ValidateFunc: validation.StringMatch(regexp.MustCompile("^[_a-z0-9]+$"), "must be lowercase letters, numbers, or underscore ('_')"),
},
Since terraform validate is failing, terraform plan and terraform apply will also fail. Short of renaming the preexisting resources (which could be disruptive), is there an easy way around this?
To fix this issue, you can use the ignore_changes lifecycle argument in your Terraform configuration to tell Terraform to ignore any changes to the name attribute of the aws_athena_database resource. This allows you to keep the existing name of the resource, even if it does not match the validation rules specified in the provider code. After adding ignore_changes, you can run terraform validate again to verify that the error has been resolved, then run terraform plan and terraform apply as usual to apply the changes to your infrastructure.
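A sketch of the lifecycle block this answer describes, applied to the configuration from the question:

```hcl
resource "aws_athena_database" "example" {
  name       = "mydatabase-dev"
  properties = {}

  lifecycle {
    # Per the answer: tell Terraform to disregard changes to "name".
    ignore_changes = [name]
  }
}
```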

Terraform outputs 'Error: Variables not allowed' when doing a plan

I've got a variable declared in my variables.tf like this:
variable "MyAmi" {
  type = map(string)
}
but when I do:
terraform plan -var 'MyAmi=xxxx'
I get:
Error: Variables not allowed
on <value for var.MyAmi> line 1:
(source code not available)
Variables may not be used here.
Minimal code example:
test.tf
provider "aws" {
}

# S3
module "my-s3" {
  source = "terraform-aws-modules/s3-bucket/aws"
  bucket = "${var.MyAmi}-bucket"
}

variables.tf
variable "MyAmi" {
  type = map(string)
}
terraform plan -var 'MyAmi=test'
Error: Variables not allowed
on <value for var.MyAmi> line 1:
(source code not available)
Variables may not be used here.
Any suggestions?
This error can also occur when trying to set a variable's value from a dynamic resource (e.g. an output from a child module):
variable "some_arn" {
  description = "Some description"
  default     = module.some_module.some_output # <--- Error: Variables not allowed
}
Using a locals block instead of the variable solves this issue:
locals {
  some_arn = module.some_module.some_output
}
I had the same error, but in my case I forgot to enclose variable values inside quotes (" ") in my terraform.tfvars file.
This is logged as an issue on the official terraform repository here:
https://github.com/hashicorp/terraform/issues/24391
I see two things that could be causing the error you are seeing. Link to terraform plan documentation.
When running terraform plan, it will automatically load any .tfvars files in the current directory. If your .tfvars file is in another directory you must provide it as a -var-file parameter. You say in your question that your variables are in a file variables.tf which means the terraform plan command will not automatically load that file. FIX: rename variables.tf to variables.tfvars
When using the -var parameter, you should ensure that what you are passing into it will be properly interpreted by HCL. If the variable you are trying to pass in is a map, then it needs to be parse-able as a map.
Instead of terraform plan -var 'MyAmi=xxxx' I would expect something more like terraform plan -var 'MyAmi={"us-east-1":"ami-123", "us-east-2":"ami-456"}'.
See this documentation for more on declaring variables and specifically passing them in via the command line.
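As an alternative to -var, a map value can also be supplied via a tfvars file; a sketch with made-up AMI IDs:

```hcl
# terraform.tfvars (loaded automatically by terraform plan/apply)
MyAmi = {
  us-east-1 = "ami-123"
  us-east-2 = "ami-456"
}
```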
I had the same issue, but my problem was missing quotes around the default value of the variable:
variable "environment_name" {
  description = "Enter Environment name"
  default     = test
}
This is how I resolved the issue:
variable "environment_name" {
  description = "Enter Environment name"
  default     = "test"
}
Check the Terraform version.
I had something similar: the module was written for version 1.0 and I was using Terraform 0.12.
I had this error in Terraform when trying to pass a list into a module, including my data source:
The given value is not suitable for module. ...
In my case I was passing the wrong thing to the module:
security_groups_allow_to_msk_on_port_2181 = concat(var.security_groups_allow_to_msk_2181, [data.aws_security_group.client-vpn-sg])
It expected the id only and not the whole object. So instead this worked for me:
security_groups_allow_to_msk_on_port_2181 = concat(var.security_groups_allow_to_msk_2181, [data.aws_security_group.client-vpn-sg.id])
Also be sure what type of object you are receiving: is it a list? watch out for the types. I had the same error message when the first argument was also enclosed in [] (brackets), since it already was a list.

How can I communicate a parameter value from a gitlab CI/CD pipeline via Terraform to a user_data script in AWS?

I have defined a Terraform configuration that sets up an EC2 instance. I use user_data to upload and run a script, which needs a parameter string of some sort; it could be an environment variable, a small file, or whatever. I have put this into GitLab and set up a .gitlab-ci.yml file that defines a pipeline with a manual stage, and I defined a variable in GitLab's Settings -> CI/CD; this should make the manual step stop and ask me to specify a value for the variable. I know that if I prefix the name of the variable with TF_VAR_, it will be visible to my Terraform scripts.
So, my question is this: I want to use the value in my user_data - is this possible?
The answer, I realised, is template files. You specify a variable, e.g. TF_VAR_BACKUP, in GitLab under 'Settings' -> 'CI/CD' -> 'Variables'. In the Terraform script it is visible like this:
variable "BACKUP" {}
...
resource "aws_instance" "bastion" {
  ami = "${var.image}"
  ...
  user_data = templatefile("${path.module}/bootstrap.tmpl", { BACKUP = var.BACKUP })
  ...
}
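The template file then uses the key passed in templatefile's second argument; a sketch of a hypothetical bootstrap.tmpl:

```bash
#!/bin/bash
# bootstrap.tmpl (hypothetical content): templatefile() substitutes ${BACKUP}
# before the script ever runs on the instance.
echo "backup target: ${BACKUP}" >> /var/log/bootstrap.log
```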

terraform error message line numbers

Is there any way to get the line number causing terraform errors? For example:
$ terraform plan
module root: module foo: bar is not a valid parameter
$
Ideally the error message would give me file paths and line numbers corresponding to the error, e.g.
$ terraform plan
File "main.tf", line 120:
bar = "123"
InvalidParameterError: "bar" is not a valid parameter of module foo
$
I understand not being a procedural language may make this more difficult but not containing a single file path nor line number seems excessive.
Unfortunately, no, there isn't currently a way to make terraform output the error file or line location.
This is a known usability issue with terraform, and the maintainers are updating error messages on a case-by-case basis. (see https://github.com/hashicorp/terraform/issues/1758).
Per mitchellh, "error messages are improving," but for now it seems that humans will have to find the errors.
Due to how terraform state is managed, there aren't always line numbers for an error to map to. Syntactical errors should result in a line number, but there are some scenarios where you'll error because of the terraform state (on disk, in S3, etc.).
For example, the following is a valid main.tf file:
terraform { }
So running terraform apply on the above should work, right? Yes, unless the terraform state is tracking a resource which still requires a provider.
Let's say your terraform state matches the following main.tf file.
terraform {
  required_providers {
    foo_provider = {
      # ..source and version
    }
  }
}

provider "foo_provider" {
  domain = "this is a required field"
}

resource "foo_resource" "bar" {
  name = "bar"
}
If you remove everything foo*, the terraform state is still tracking the foo_resource, so you can't just run terraform apply against an empty main.tf file.
Let's say you do anyway. Run terraform apply against the empty main.tf
terraform { }
You will probably get an error like the following: The argument "domain" is required, but was not set. And there will be no line number! The error can be super generic and make no mention of the resource or provider causing it. It comes from your terraform-tracked state, not syntax. You have to remove the resource from terraform's tracked state before removing the provider (and its required arguments).
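Concretely, the orphaned resource can be dropped from tracked state with terraform state rm (the address here is illustrative, based on the example above):

```shell
# Remove the resource from Terraform's state so the provider (and its
# required "domain" argument) can then be deleted from the configuration.
terraform state rm foo_resource.bar
```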
