Consider the following Terraform template:
terraform {
required_providers {
azurecaf = {
source = "aztfmod/azurecaf"
version = "1.2.23"
}
}
}
provider "azurerm" {
features {}
}
resource "azurecaf_name" "rg_name" {
name = var.appname
resource_type = "azurerm_resource_group"
prefixes = ["dev"]
suffixes = ["y", "z"]
random_length = 5 // <------ random part in name generation
clean_input = false
}
resource "azurerm_resource_group" "example" {
name = azurecaf_name.rg_name.result
location = var.resource_group_location
}
I applied this template, then ran terraform plan. Terraform plan tells me:
No changes. Your infrastructure matches the configuration.
How does that work, given that azurecaf_name.rg_name contains random characters? I would have expected it to create a new resource group with a new (random) name. I know that Terraform keeps a state, but doesn't it execute the template every time (= new random name) and then check whether that matches the state and the real resources in the cloud?
The random characters are stored in the state file along with the other attributes of the resource, such as random_length, prefixes and suffixes. When you rerun Terraform, it first checks whether any of the attributes in your configuration have changed from what is in the state file. In your case all the values, like random_length, are the same, so Terraform knows it does not need to regenerate anything random, because your configuration has not changed. Terraform then checks the state of the resource in the remote provider using the resource ID generated by that provider. If the object does not exist, Terraform will create it with the same data; if it exists but its attributes do not match, Terraform will plan an update; and if the values there match what is in the state file, Terraform knows no changes are needed.
We can see this in the random_pet resource
resource "random_pet" "foo" {
length = 3
prefix = "foo"
}
State file
"resources": [
{
"mode": "managed",
"type": "random_pet",
"name": "foo",
"provider": "provider[\"registry.terraform.io/hashicorp/random\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "foo-evidently-secure-killdeer",
"keepers": null,
"length": 3,
"prefix": "foo",
"separator": "-"
},
"sensitive_attributes": []
}
]
}
],
Resources that have a form of randomness normally work this way: the random value is generated once when the resource is created, stored in the state, and only changes if the resource is recreated or one of its inputs changes.
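As a side note, if you ever do want the random part regenerated, the random provider resources expose a keepers map (it shows up as null in the state above); changing any value in it forces the resource to be recreated with a new random value. A minimal sketch, where var.environment is a hypothetical variable used only for illustration:
resource "random_pet" "foo" {
  length = 3
  prefix = "foo"

  # Changing any value in this map forces a new pet name to be generated.
  keepers = {
    environment = var.environment # hypothetical variable, illustration only
  }
}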
Related
I have been struggling with this issue for too long. I am trying to set up some infrastructure with Terraform and Cisco ACI. What I want to do now is set up multiple EPGs for one tenant. With for_each I am iterating over my JSON, but I don't understand how I can iterate over the epg key (which contains a list of strings). It can't be that hard! But I don't get it.
First of all my .json
{
"tenants": [
{
"id": 1,
"name": "kde0815",
"bd": "bd0815",
"vRF": "vrf0815",
"epg": [
"epg1"
],
"_fwdCtrl": "disabled",
"_isAttrBasedEPg": "no",
"_matchT": "no",
"_prefGrMemb": "unenforced"
},
{
"id": 97,
"name": "kde0816",
"bd": [
"bd0816"
],
"vRF": "vrf0816",
"epg": [
"epg1",
"epg2,
"epg3,
"epg4"
],
"_fwdCtrl": "disabled",
"_isAttrBasedEPg": "no",
"_matchT": "no",
"_prefGrMemb": "unenforced"
}
]
}
my resources.tf
locals {
# get json
user_data = jsondecode(file("./data/aci-data_test.json"))
# get all users
all_users = [for tenants in local.user_data.tenants : tenants.name]
}
resource "aci_application_epg" "epgLocalName" {
for_each = { for inst in local.user_data.tenants : inst.id => inst }
relation_fv_rs_bd = aci_bridge_domain.bdLocalName[each.value.id].id
application_profile_dn = aci_application_profile.apLocalName[each.value.id].id
# dynamic "name" {
# for_each = each.value.epg
# content {
# name = name.value
# }
#}
}
Of course this is not all of my code; I've already created the tenants, bridge domains and so on.
When I tried to use the dynamic block to iterate over "epg", two errors occurred:
Error: Missing required argument
│ The argument "name" is required, but no definition was found
Error: Unsupported block type
│ Blocks of type "name" are not expected here.
so I tried to use a second for_each loop, which gave:
Error: Attribute redefined: The argument "for_each" was already set at resources.tf:54,3-11. Each argument may be set only once.
So far I understood that you use for expressions only for modifying/filtering collections. Is there a way to use a for loop just to pass the string to the "name" argument of the aci_application_profile?
I am really stuck with Terraform here... never had problems with Python doing this.
So if you have any idea I would really appreciate it.
You have to flatten your user_data. For example:
locals {
flat_all_users = merge([ for inst in local.user_data.tenants:
{
for epg in inst.epg:
"${inst.id}-${epg}" => {
id = inst.id
name = epg
}
}
]...)
}
then
resource "aci_application_epg" "epgLocalName" {
for_each = local.flat_all_users
relation_fv_rs_bd = aci_bridge_domain.bdLocalName[each.value.id].id
application_profile_dn = aci_application_profile.apLocalName[each.value.id].id
name = each.value.name
}
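With the sample JSON above, local.flat_all_users evaluates to one entry per tenant/EPG pair, roughly like this (shown only to illustrate the shape of the map the resource iterates over):
{
  "1-epg1"  = { id = 1,  name = "epg1" }
  "97-epg1" = { id = 97, name = "epg1" }
  "97-epg2" = { id = 97, name = "epg2" }
  "97-epg3" = { id = 97, name = "epg3" }
  "97-epg4" = { id = 97, name = "epg4" }
}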
I am attempting to use Terraform and an embedded ARM template to create a simple logic app in Azure. I have the resource block in Terraform as:
resource "azurerm_resource_group_template_deployment" "templateTEST" {
name = "arm-Deployment"
resource_group_name = azurerm_resource_group.rg.name
deployment_mode = "Incremental"
template_content = file("${path.module}/arm/createLogicAppsTEST.json")
parameters_content = jsonencode({
logic_app_name = { value = "logic-${var.prefix}" }
})
}
and the createLogicAppsTEST.json file is defined as follows (just the top few lines):
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"logic_app_name": {
"defaultValue": "tsa-logic-dgtlbi-stage-001",
"type": "string"
}
},
"variables": {},
"resources": [
{
....
When deploying and running the first time, i.e. creating the logic app resource using Terraform and the embedded ARM template, it creates the resource with the name passed correctly via:
parameters_content = jsonencode({
logic_app_name = { value = "logic-${var.prefix}" }
})
However, if I ever run it again, Terraform appears to ignore the parameters I am passing and goes with the default from the ARM template:
"logic_app_name": {
"defaultValue": "tsa-logic-dgtlbi-stage-001",
"type": "string"
}
I have updated to the latest version of both Terraform (0.14.2) and azurerm (2.40.0), yet the issue persists. At present this makes embedded ARM templates in Terraform problematic, because the different tiers at my company (dev, test and prod) have different prefixes and names, i.e. prod-, dev-.
Is there a setting to make Terraform actually use the parameters I am passing to the azurerm_resource_group_template_deployment resource block?
Based on my testing, you could use the ignore_changes argument in the nested lifecycle block. It tells Terraform to ignore the listed attributes when planning updates to the associated remote object.
For example,
resource "azurerm_resource_group_template_deployment" "templateTEST" {
name = "arm-Deployment"
resource_group_name = azurerm_resource_group.rg.name
deployment_mode = "Incremental"
template_content = file("${path.module}/arm/createLogicAppsTEST.json")
parameters_content = jsonencode({
logic_app_name = { value = "logic-${var.prefix}" }
})
lifecycle {
ignore_changes = [template_content,]
}
}
However, in this case it would be better to declare the parameters without default values in the embedded ARM template, and instead pass the real values via parameters_content.
For example, declare the parameter like this in the ARM template; the deployment will then always use the externally supplied parameter value.
"logic_app_name": {
"type": "string"
}
I elected to just use the old provider; there is actually an open bug report about this same issue on GitHub.
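For anyone taking the same route: this presumably means falling back to the older azurerm_template_deployment resource instead of azurerm_resource_group_template_deployment. A minimal sketch of that approach, with the names carried over from the question (the exact arguments may differ depending on your provider version):
resource "azurerm_template_deployment" "templateTEST" {
  name                = "arm-Deployment"
  resource_group_name = azurerm_resource_group.rg.name
  deployment_mode     = "Incremental"
  template_body       = file("${path.module}/arm/createLogicAppsTEST.json")

  # Parameter values are passed as a plain map of strings here,
  # rather than as a JSON-encoded parameters_content document.
  parameters = {
    logic_app_name = "logic-${var.prefix}"
  }
}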
I have an EC2 instance which was manually created on AWS. I need to run a bash script inside my instance using Terraform without recreating the EC2 instance. This is my tf file for this.
instance.tf
resource "aws_key_pair" "mykey" {
key_name = "mykey"
public_key = file(var.PUBLIC_KEY)
}
resource "aws_instance" "example" {
key_name = aws_key_pair.mykey.key_name
provisioner "file" {
source="script.sh"
destination="/tmp/script.sh"
}
connection {
type ="ssh"
user ="ubuntu"
private_key=file(var.PRIVATE_KEY)
host = coalesce(self.public_ip, self.private_ip)
}
}
vars.tf
variable "INSTANCE_USERNAME" {
default = "ubuntu"
}
variable "PUBLIC_KEY" {
default = "mykey.pub"
}
variable "PRIVATE_KEY" {
default ="mykey"
}
variable "AMIS" {}
variable "INSTANCE_TYPE" {}
provider.tf
provider "aws" {
access_key = "sd**********"
secret_key = "kubsd**********"
region = "us-east-2"
}
I have imported my current state using
terraform import aws_instance.example instance-id
This is my state file
{
"version": 4,
"terraform_version": "0.12.17",
"serial": 1,
"lineage": "54385313-09b6-bc71-7c9c-a3d82d1f7d2f",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "aws_instance",
"name": "example",
"provider": "provider.aws",
"instances": [
{
"schema_version": 1,
"attributes": {
"ami": "ami-0d5d9d301c853a04a",
"arn": "arn:aws:ec2:us-east-2:148602461879:instance/i-054caec795bbbdf2d",
"associate_public_ip_address": true,
"availability_zone": "us-east-2c",
"cpu_core_count": 1,
"cpu_threads_per_core": 1,
"credit_specification": [
{
"cpu_credits": "standard"
}
],
continues...
But when I run terraform plan it shows errors like:
Error: Missing required argument
on instance.tf line 5, in resource "aws_instance" "example":
5: resource "aws_instance" "example" {
The argument "ami" is required, but no definition was found.
Error: Missing required argument
on instance.tf line 5, in resource "aws_instance" "example":
5: resource "aws_instance" "example" {
The argument "instance_type" is required, but no definition was found.
I can't understand why it is asking for instance_type and ami. They are present inside the terraform.tfstate file after importing my state. Do I need to pass this data manually? Is there any way to automate this process?
The terraform import command exists to allow you to bypass the usual requirement that Terraform must be the one to create a remote object representing each resource in your configuration. You should think of it only as a way to tell Terraform that your resource "aws_instance" "example" block is connected to the remote object instance-id (using the placeholder you showed in your example). You still need to tell Terraform the desired configuration for that object, because part of Terraform's role is to notice when your configuration disagrees with the remote objects (via the state) and make a plan to correct it.
Your situation here is a good example of why Terraform doesn't just automatically write out the configuration for you in terraform import: you seem to want to set these arguments from input variables, but Terraform has no way to know that unless you write that out in the configuration:
resource "aws_instance" "example" {
# I'm not sure how you wanted to map the list of images to one
# here, so I'm just using the first element for example.
ami = var.AMIS[0]
instance_type = var.INSTANCE_TYPE
key_name = aws_key_pair.mykey.key_name
}
By writing that out explicitly, Terraform can see where the values for those arguments are supposed to come from. (Note that idiomatic Terraform style is for variables to have lower-case names like instance_type, not uppercase names like INSTANCE_TYPE.)
There is no way to use existing values from the state to populate the configuration because that is the reverse direction of Terraform's flow: creating a Terraform plan compares the configuration to the state and detects when the configuration is different, and then generates actions that will make the remote objects match the configuration. Using the values in the state to populate the configuration would defeat the object, because Terraform would never be able to detect any differences.
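Putting the pieces together, here is a sketch of what the full resource block might look like once it also carries the provisioner and connection blocks from the question, plus a remote-exec step (my addition, not part of the answer) to actually run the script. Bear in mind that provisioners only run when Terraform creates the resource (or when it is tainted/replaced), so they will not fire for an instance that was only imported:
resource "aws_instance" "example" {
  # Required arguments that terraform import cannot infer for you:
  ami           = var.AMIS[0] # assuming the first AMI is the one the instance uses
  instance_type = var.INSTANCE_TYPE
  key_name      = aws_key_pair.mykey.key_name

  # Copies the script to the instance. Provisioners only run at creation time,
  # so this does not execute against an instance that was merely imported.
  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  # Runs the script once it has been copied.
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "/tmp/script.sh",
    ]
  }

  connection {
    type        = "ssh"
    user        = var.INSTANCE_USERNAME
    private_key = file(var.PRIVATE_KEY)
    host        = coalesce(self.public_ip, self.private_ip)
  }
}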
I am trying to get an array of arrays to use in a Terraform template_file data field:
data "template_file" "dashboard" {
template = "${file("${path.module}/files/dashboard.json")}"
vars {
metrics = "${jsonencode(local.metrics)}"
}
}
But I am not finding the proper way to get what I want. I have an aws_instance resource with a count of 3, and I am trying to generate 3 arrays inside a local, one per instance. The only thing I've come up with so far is:
locals {
metrics = [
"collectd", "GenericJMX.gauge.50thPercentile", "Host", "${aws_instance.instance.*.id}", "PluginInstance", "cassandra_client_request-latency"
]
}
Obviously, what this does is put all the instance IDs one after the other in the same array. What I am trying to achieve is a result that would look like:
["collectd", "GenericJMX.gauge.50thPercentile", "Host", "the id of instance 0", PluginInstance", "cassandra_client_request-latency"],
["collectd", "GenericJMX.gauge.50thPercentile", "Host", "the id of instance 1", PluginInstance", "cassandra_client_request-latency"],
["collectd", "GenericJMX.gauge.50thPercentile", "Host", "the id of instance 3", PluginInstance", "cassandra_client_request-latency"]
And this would be expanded in the template ${metrics} variable.
Is there any way to achieve what I want, inside a local, and make it usable in the template?
Terraform data sources support count as well.
It is a hidden feature and was never documented (https://github.com/hashicorp/terraform/pull/8635).
Make some adjustments to your dashboard.json, then use the code below to generate one template_file data source per instance.
data "template_file" "dashboard" {
count = "${length(aws_instance.instance.*.id)}"
template = "${file("${path.module}/files/dashboard.json")}"
vars {
metrics = "${element(aws_instance.instance.*.id, count.index)}"
}
}
You can then reference the rendered templates from other counted resources:
count = "${length(aws_instance.instance.*.id)}"
"${data.template_file.dashboard.*.rendered[count.index]}"
Here are the full test data.
$ cat main.tf
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "instance" {
count = 2
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
tags = {
Name = "HelloWorld"
}
}
data "template_file" "dashboard" {
count = "${length(aws_instance.instance.*.id)}"
template = "${file("${path.module}/files/dashboard.json")}"
vars {
metric = "${element(aws_instance.instance.*.id, count.index)}"
}
}
output "aws_instances" {
value = "${length(aws_instance.instance.*.id)}"
}
$ cat files/dashboard.json
["collectd", "GenericJMX.gauge.50thPercentile", "Host", "${metric}", PluginInstance", "cassandra_client_request-latency"]
After you apply the change, check the tfstate file; the data sources are:
data.template_file.dashboard.0
data.template_file.dashboard.1
Sample tfstate:
"data.template_file.dashboard.1": {
"type": "template_file",
"depends_on": [
"aws_instance.instance.*"
],
"primary": {
"id": "8e05e7c115a8d482b9622a1eddf5ee1701b8cc4695da5ab9591899df5aeb703d",
"attributes": {
"id": "8e05e7c115a8d482b9622a1eddf5ee1701b8cc4695da5ab9591899df5aeb703d",
# the data is here ==> "rendered": "[\"collectd\", \"GenericJMX.gauge.50thPercentile\", \"Host\", \"i-015961b744ff55da4\", PluginInstance\", \"cassandra_client_request-latency\"]\n",
"template": "[\"collectd\", \"GenericJMX.gauge.50thPercentile\", \"Host\", \"${metric}\", PluginInstance\", \"cassandra_client_request-latency\"]\n",
"vars.%": "1",
"vars.metric": "i-015961b744ff55da4"
},
"meta": {},
"tainted": false
},
"deposed": [],
"provider": "provider.template"
}
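If you then need the original array-of-arrays back as a single value, one option (just a sketch, not part of the answer above; metrics_json is an illustrative name) is to join the per-instance rendered fragments in a local:
locals {
  # Joins the per-instance fragments into one JSON array of arrays,
  # e.g. [["collectd", ...], ["collectd", ...]]
  metrics_json = "[${join(",", data.template_file.dashboard.*.rendered)}]"
}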
I want to create a new WebApp resource in an existing resource group.
This question and this post explain how we can import an existing resource (instead of creating a new one every time).
I was able to import my existing resource group using the command below:
terraform import azurerm_resource_group.rg-myResourceGroup /subscriptions/00000-my-subscription-id-0000000/resourceGroups/rg-myResourceGroup
After executing this command I can see a new file named 'terraform.tfstate' is created. Below is the content of the file:
{
"version": 3,
"terraform_version": "0.11.11",
"serial": 1,
"lineage": "-----------------------------",
"modules": [
{
"path": [
"root"
],
"outputs": {},
"resources": {
"azurerm_resource_group.rg-ResourceGroupName": {
"type": "azurerm_resource_group",
"depends_on": [],
"primary": {
"id": "/subscriptions/subscription-id-00000000000/resourceGroups/rg-hemant",
"attributes": {
"id": "/subscriptions/subscription-id-00000000000/resourceGroups/rg-hemant",
"location": "australiaeast",
"name": "rg-ResourceGroupName",
"tags.%": "0"
},
"meta": {},
"tainted": false
},
"deposed": [],
"provider": "provider.azurerm"
}
},
"depends_on": []
}
]
}
Now my question is: how can I access/reference/include terraform.tfstate in my main.tf?
resource "azurerm_resource_group" "rg-hemant" {
#name = it should be rg-ResourceGroupName
#location = it should be australiaeast
}
UPDATE 1
1) Assume that in my subscription 'mysubscription1' there is a resource group 'rg-existing'.
2) This resource group already has a few resources, e.g. webapp1, storageaccount1.
3) Now I want to write a Terraform script which will add a new resource (e.g. newWebapp1) to the existing resource group 'rg-existing', so after the terraform apply operation rg-existing should have the resources below:
webapp1
storageaccount1
newWebapp1 (added by the new terraform apply script)
4) Note that I don't want Terraform to create (in case of apply) or delete (in case of destroy) my existing resources which belong to rg-existing.
You don't really need to; you just need to map your resource to the one in the tfstate, so just do:
resource "azurerm_resource_group" "rg-hemant" {
name = "rg-ResourceGroupName"
location = "australiaeast"
}
and Terraform should recognize this resource as the one you have in the state file.
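From there, adding the new web app is just a matter of declaring it against the same resource group. A minimal sketch, assuming an App Service on a new plan (the names newWebapp1 and plan-newWebapp1 and the SKU values are illustrative, not from the question):
resource "azurerm_app_service_plan" "plan" {
  name                = "plan-newWebapp1"
  location            = "${azurerm_resource_group.rg-hemant.location}"
  resource_group_name = "${azurerm_resource_group.rg-hemant.name}"

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "newWebapp1" {
  name                = "newWebapp1"
  location            = "${azurerm_resource_group.rg-hemant.location}"
  resource_group_name = "${azurerm_resource_group.rg-hemant.name}"
  app_service_plan_id = "${azurerm_app_service_plan.plan.id}"
}
Since webapp1 and storageaccount1 are neither in the configuration nor in the state, terraform apply and terraform destroy will leave them untouched.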
Dug more through the posts and found a solution here.
We can pass additional parameters to terraform destroy to specifically mention which resources we want to destroy:
terraform destroy -target RESOURCE_TYPE.NAME -target RESOURCE_TYPE2.NAME
Note: What I have learnt is that in this case there is no need to use the terraform import command.