What is the syntax to assign the managed disk id from a newly created virtual machine to an output? I would like to use it as the "source_resource_id" of an "azurerm_managed_disk" resource.
I have tried the following within outputs.tf:
output "manageddisk" {
value = azurerm_virtual_machine.vm.storage_os_disk.[0].managed_disk_id
}
However, this results in the following error:
╷
│ Error: Invalid attribute name
│
│ On outputs.tf line 17: An attribute name is required after a dot.
Arguments and attributes are two separate things, so not every argument can be read back as an attribute. Terraform's docs make this distinction by having an "Argument Reference" and an "Attributes Reference" section for each resource; only the attributes of a resource can be accessed.
One pathway is to create the azurerm_managed_disk resource separately, which does expose the identifier of the managed disk. You can then pass that resource's id attribute as an input to the virtual machine you're provisioning, and output it directly:
output "manageddisk" {
value = azurerm_managed_disk.example.id
}
From here you have the output of the managed disk, so you can use it for whatever you'd like.
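For example, that id can then be used as the source_resource_id the question asks about, e.g. to copy the disk. A minimal sketch, assuming an existing resource group referenced as azurerm_resource_group.example:
resource "azurerm_managed_disk" "copy" {
  name                 = "copied-disk"
  location             = azurerm_resource_group.example.location
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Copy"                            # copy from an existing managed disk
  source_resource_id   = azurerm_managed_disk.example.id   # the id exposed by the output above
}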
I'm importing a lot of existing infrastructure to Terraform. On multiple occasions (and with various resource types), I've seen issues like the following...
After a successful import of a resource (and manually copying the associated state into my configuration), running terraform validate returns an error because the provider's argument validation rules are more restrictive than what the service actually accepted when the resource was created.
Example imported configuration:
resource "aws_athena_database" "example" {
name = "mydatabase-dev"
properties = {}
}
Example validation error:
$ terraform validate
╷
│ Error: invalid value for name (must be lowercase letters, numbers, or underscore ('_'))
│
│ with aws_athena_database.example,
│ on main.tf line 121, in resource "aws_athena_database" "example":
│ 11: name = "mydatabase-dev"
│
Error caused by this provider code:
"name": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validation.StringMatch(regexp.MustCompile("^[_a-z0-9]+$"), "must be lowercase letters, numbers, or underscore ('_')"),
},
Since terraform validate is failing, terraform plan and terraform apply will also fail. Short of renaming the preexisting resources (which could be disruptive), is there an easy way around this?
To work around this, you can use the ignore_changes argument in the resource's lifecycle block to tell Terraform to ignore changes to the name attribute of the aws_athena_database resource. This lets you keep the existing name of the resource even though it does not match the validation rules in the provider code. After adding ignore_changes, run terraform validate again to verify that the error has been resolved, then run terraform plan and terraform apply as usual to apply the changes to your infrastructure.
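For reference, a sketch of the lifecycle block this answer describes, added to the imported resource:
resource "aws_athena_database" "example" {
  name       = "mydatabase-dev"
  properties = {}

  lifecycle {
    # Tell Terraform not to act on changes to the name argument.
    ignore_changes = [name]
  }
}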
I understand that CI/CD variables can be used in HCL by relying on the fact that declaring them with a TF_VAR_ prefix in the environment lets me look them up as input variables and then use them in the .tf file where I need them.
I did:
- set my variable via the UI in the GitLab project, as TF_VAR_ibm_api_key, then masked it
- write a variable block for it in main.tf
- call it where I need it in the same file, main.tf
- try including the variable in variables.tf, with the same result
- read the documentation from GitLab and from Terraform, but I'm not getting this right
This is my main.tf file:
variable ibm_api_key {
}
terraform {
required_version = ">= 0.13"
required_providers {
ibm = {
source = "IBM-Cloud/ibm"
}
}
}
provider "ibm" {
ibmcloud_api_key = var.ibm_api_key
}
Expected behavior: the variable is passed from the CI/CD settings and picked up by the HCL code.
Current behavior: during plan, the job fails with exit code 1:
$ terraform plan
var.ibm_api_key
Enter a value: ╷
│ Error: No value for required variable
│
│ on main.tf line 1:
│ 1: variable ibm_api_key {
│
│ The root module input variable "ibm_api_key" is not set, and has no default
│ value. Use a -var or -var-file command line argument to provide a value for
│ this variable.
╵
Although it logically shouldn't be the issue, I tried formatting the variable reference as string interpolation, like:
provider "ibm" {
ibmcloud_api_key = "${var.ibm_api_key}"
}
naturally to no avail.
Although it logically shouldn't be the issue, I also tried defining a type for the variable:
variable ibm_api_key {
type = string
}
naturally to no avail.
To check whether variables are passed from the CI/CD settings to the GitLab runner's environment, I added a variable that is neither protected nor masked, assigned it a string value, and inserted a double check in the job script:
echo ${output_check}
echo ${TF_VAR_ibm_api_key}
which does not result in an error, but the values are not printed either. Only the echo commands appear in the output.
$ echo ${output_check}
$ echo ${TF_VAR_ibm_api_key}
Cleaning up project directory and file based variables 00:01
Job succeeded
Providers typically have environment variables wired into their schema and/or associated bindings for authentication, and according to the provider's authentication documentation this situation is no different. You can authenticate the provider with an IBM API key from the GitLab CI project's environment variable settings with:
IAAS_CLASSIC_API_KEY="iaas_classic_api_key"
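With that variable set in the project's CI/CD settings, the provider block can be left without an explicit key argument. A sketch, assuming the provider reads its credentials from the job environment as its authentication documentation describes:
provider "ibm" {
  # No API key arguments here: the credentials are expected to come from
  # the environment variables exported to the CI job.
}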
The error was in the CI/CD settings.
The variables were set to be passed exclusively to protected branches, and I was pushing my code to an unprotected one, which prevented the variables from being passed. When the code was merged to a protected branch, the variables showed up correctly. They are also imported into Terraform as expected, with the TF_VAR_ prefix stripped.
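For reference, the root-module declaration that picks up TF_VAR_ibm_api_key from the environment is just a normal input variable; marking it sensitive (a Terraform 0.14+ feature) additionally keeps the key out of plan output:
variable "ibm_api_key" {
  type      = string
  sensitive = true # requires Terraform >= 0.14
}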
TL;DR: if you're having this issue in GitLab CI/CD, check your CI/CD variables' protected-branches setting and whether the branch you're pushing to matches it.
I am trying to create a dependency between multiple sub-modules, which should be able to create their resources individually as well as when they depend on each other.
Basically, I am trying to create multiple VMs and, based on the IP addresses and VIP IP address returned as output, create the LBaaS pool and LBaaS pool members.
I have kept the project structure as below:
- Root_Folder
- main.tf // creates all the VMs
- output.tf
- variable.tf
- calling_module.tf
- modules
- lbaas-pool
- lbaas-pool.tf // create lbaas pool
- variable.tf
- output.tf
- lbaas-pool-members
- lbaas-pool-members.tf // create lbaas pool member
- variable.tf
- output.tf
calling_module.tf contains the references to the lbaas-pool and lbaas-pool-members modules, as these two modules depend on the output of the resources generated by the main.tf file.
It is giving the below error:
A managed resource has not been declared.
The resource has not been created yet, so when running the terraform plan and apply commands Terraform tries to load a resource object that does not exist. I am not sure how, with this structure, to declare the implicit dependency between the modules and the resources so that each module can work individually as well as part of the complete stack when required.
Expected behaviour:
The output parameters of main.tf should create the dependency automatically in Terraform 0.14, but from the above error it seems that is not the case.
Let's say you have a module that takes an instance ID as an input, so in modules/lbaas-pool you have this inside variable.tf
variable "instance_id" {
type = string
}
Now let's say you define that instance resource in main.tf:
resource "aws_instance" "my_instance" {
...
}
Then to pass that resource to any modules defined in calling_module.tf (or in any other .tf file in the same folder as main.tf), you would do so like this:
module "lbaas-pool" {
src="modules/lbaas-pool"
instance_id = aws_instance.my_instance.id
...
}
Notice how there is no output defined at all here. Any output at the root level is for exposing outputs to the command line console, not for sending things to child modules.
Also notice how there is no data source defined here. You are not writing a script that will run in a specific order; you are writing templates that tell Terraform what you want your final infrastructure to look like. Terraform reads all of that, creates a dependency graph, and then deploys everything in the order it determines. Anything you reference via a data source has to already exist at the time you run terraform plan or apply. Terraform doesn't create everything in the root module, then load the submodule and create everything there; it creates things in whatever order is necessary based on the dependency graph.
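To make that concrete for the lbaas-pool-members case: if one child module needs a value created by another, the root module wires the first module's output into the second module's input, and Terraform derives the dependency from that reference. A sketch, assuming the lbaas-pool module declares an output named pool_id in modules/lbaas-pool/output.tf and lbaas-pool-members accepts a pool_id variable:
module "lbaas-pool" {
  source      = "./modules/lbaas-pool"
  instance_id = aws_instance.my_instance.id
}

module "lbaas-pool-members" {
  source  = "./modules/lbaas-pool-members"
  pool_id = module.lbaas-pool.pool_id # hypothetical output exposed by the lbaas-pool module
}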
I have the following folder structure:
infrastructure
└───security-groups
│ │ main.tf
│ │ config.tf
│ │ security_groups.tf
│
└───instances
│ main.tf
│ config.tf
│ instances.tf
I would like to reference the security group id created in the security-groups folder from the instances folder.
I have tried to output the required ids in the security_groups.tf file with
output "sg_id" {
value = "${aws_security_group.server_sg.id}"
}
And then in the instances file add it as a module:
module "res" {
source = "../security-groups"
}
The problem with this approach is that when I do terraform apply in the instances folder, it tries to create the security groups as well (which I have already created by running terraform apply in the security-groups folder), and it fails because the security groups already exist.
What would be the easiest way to reference the resources created in a different folder, without changing the structure of the code?
Thank you.
To refer to an existing resource you need to use a data block. You won't refer directly to the resource block in the other folder, but instead specify a name or ID or whatever other unique identifier it has. So for a security group, you would add something like
data "aws_security_group" "sg" {
name = "the-security-group-name"
}
to your instances folder, and refer to that entity to associate your instances with that security group.
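For example, a sketch of an instance in the instances folder using that data source (the AMI id is a placeholder):
resource "aws_instance" "example" {
  ami                    = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type          = "t3.micro"
  vpc_security_group_ids = [data.aws_security_group.sg.id]
}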
You should also consider whether you actually want to be just applying terraform to the whole tree, instead of each folder separately. Then you can refer between all your managed resources directly like you are trying to do, and you don't have to call terraform apply as many times.
While lxop's answer is a better practice, if you really do need to refer to an output in another local folder, you can do it like this:
data "terraform_remote_state" "sg" {
backend = "local"
config = {
path = "../security-groups/terraform.tfstate"
}
}
and then refer to it using e.g.
locals {
sgId = data.terraform_remote_state.sg.outputs.sg_id
}
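For example, a hypothetical rule in the instances folder that attaches to the security group managed in the other folder:
resource "aws_security_group_rule" "allow_https" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = local.sgId # value read from ../security-groups/terraform.tfstate
}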
I've got a variable declared in my variables.tf like this:
variable "MyAmi" {
type = map(string)
}
but when I do:
terraform plan -var 'MyAmi=xxxx'
I get:
Error: Variables not allowed
on <value for var.MyAmi> line 1:
(source code not available)
Variables may not be used here.
Minimal code example:
test.tf
provider "aws" {
}
# S3
module "my-s3" {
source = "terraform-aws-modules/s3-bucket/aws"
bucket = "${var.MyAmi}-bucket"
}
variables.tf
variable "MyAmi" {
type = map(string)
}
terraform plan -var 'MyAmi=test'
Error: Variables not allowed
on <value for var.MyAmi> line 1:
(source code not available)
Variables may not be used here.
Any suggestions?
This error can also occur when trying to set a variable's value from a dynamic resource (e.g. an output from a child module):
variable "some_arn" {
description = "Some description"
default = module.some_module.some_output # <--- Error: Variables not allowed
}
Using locals block instead of the variable will solve this issue:
locals {
some_arn = module.some_module.some_output
}
I had the same error, but in my case I forgot to enclose variable values inside quotes (" ") in my terraform.tfvars file.
This is logged as an issue on the official terraform repository here:
https://github.com/hashicorp/terraform/issues/24391
I see two things that could be causing the error you are seeing (see the terraform plan documentation).
When running terraform plan, Terraform automatically loads terraform.tfvars and any *.auto.tfvars files in the current directory; any other variable-definitions file must be passed with the -var-file parameter. You say in your question that your variables are in a file named variables.tf, which only declares the variables and does not assign values to them. FIX: put the variable values in a terraform.tfvars (or *.auto.tfvars) file, or pass a definitions file explicitly with -var-file.
When using the -var parameter, you should ensure that what you are passing into it will be properly interpreted by HCL. If the variable you are trying to pass in is a map, then it needs to be parse-able as a map.
Instead of terraform plan -var 'MyAmi=xxxx' I would expect something more like terraform plan -var 'MyAmi={"us-east-1":"ami-123", "us-east-2":"ami-456"}'.
See this documentation for more on declaring variables and specifically passing them in via the command line.
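If MyAmi really is meant to be a map, the configuration would then index it rather than interpolate the whole map. A sketch, picking one hypothetical key:
module "my-s3" {
  source = "terraform-aws-modules/s3-bucket/aws"
  bucket = "${var.MyAmi["us-east-1"]}-bucket" # index one entry of the map
}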
I had the same issue, but in my case the problem was missing quotes around the default value of the variable:
variable "environment_name" {
description = "Enter Environment name"
default= test
}
This is how I resolved the issue:
variable "environment_name" {
description = "Enter Environment name"
default= "test"
}
Check the Terraform version.
I had something similar: the module was written for version 1.0 and I was using Terraform version 0.12.
I had this error in Terraform when trying to pass a list that included my data source into a module:
The given value is not suitable for module. ...
In my case I was passing the wrong thing to the module:
security_groups_allow_to_msk_on_port_2181 = concat(var.security_groups_allow_to_msk_2181, [data.aws_security_group.client-vpn-sg])
It expected the id only and not the whole object. So instead this worked for me:
security_groups_allow_to_msk_on_port_2181 = concat(var.security_groups_allow_to_msk_2181, [data.aws_security_group.client-vpn-sg.id])
Also be sure about the type of object you are receiving: is it a list? Watch out for the types. I had the same error message when the first argument was also enclosed in brackets ([]), since it was already a list.
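A sketch of that last point, reusing the data source from the code above (extra_sg_ids is a hypothetical variable that is already a list):
variable "extra_sg_ids" {
  type    = list(string)
  default = []
}

locals {
  # Correct: extra_sg_ids is already a list, so it is passed to concat as-is.
  # Wrapping it in brackets again ([var.extra_sg_ids]) would produce a list of
  # lists and trigger the same "not suitable" type error.
  all_sg_ids = concat(var.extra_sg_ids, [data.aws_security_group.client-vpn-sg.id])
}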