I downloaded helm-config.yaml using the following Terraform block:
resource "null_resource" "get-helm-config" {
  triggers = {
    always_run = timestamp()
  }
  provisioner "local-exec" {
    command = "wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-helm-config.yaml -O helm-config.yaml"
  }
  depends_on = [null_resource.set-kube-config]
}
Now I have to replace the placeholders below in that helm-config.yaml:
verbosityLevel: 3

appgw:
  subscriptionId: <subscriptionId>
  resourceGroup: <resourceGroupName>
  name: <applicationGatewayName>
  usePrivateIP: false
  shared: false

armAuth:
  type: aadPodIdentity
  identityResourceID: <identityResourceId>
  identityClientID: <identityClientId>

rbac:
  enabled: true # true/false
and I have implemented something like this:
resource "local_file" "update-helm-config" {
  content = replace(
    replace(
      replace(
        replace(
          replace(
            file("helm-config.yaml"),
            "<subscriptionId>",
            data.azurerm_client_config.current.subscription_id
          ),
          "<resourceGroupName>",
          azurerm_resource_group.rg.name
        ),
        "<applicationGatewayName>",
        azurerm_application_gateway.network.name
      ),
      "<identityResourceId>",
      azurerm_user_assigned_identity.testIdentity.id
    ),
    "<identityClientId>",
    azurerm_user_assigned_identity.testIdentity.client_id
  )
  filename   = "helm-config.yaml"
  depends_on = [null_resource.get-helm-config]
}
Of course, that's not very readable or maintainable.
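A more maintainable alternative (a sketch, not your exact setup: it assumes you keep the sample file locally as a template named helm-config.tpl, with `${...}` placeholders in place of the `<...>` ones) is Terraform's built-in templatefile() function, which takes one map of substitutions instead of nested replace() calls:

```hcl
# Sketch: helm-config.tpl would contain ${subscription_id},
# ${resource_group_name}, etc. instead of <subscriptionId>, <resourceGroupName>.
resource "local_file" "update-helm-config" {
  content = templatefile("${path.module}/helm-config.tpl", {
    subscription_id          = data.azurerm_client_config.current.subscription_id
    resource_group_name      = azurerm_resource_group.rg.name
    application_gateway_name = azurerm_application_gateway.network.name
    identity_resource_id     = azurerm_user_assigned_identity.testIdentity.id
    identity_client_id       = azurerm_user_assigned_identity.testIdentity.client_id
  })
  filename = "helm-config.yaml"
}
```

Keeping the template in the module also lets you drop the null_resource download step entirely.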
I am trying to achieve the following:
Using the lifecycle block to ignore tags applied to resources by Azure Policy.
Background
I have a Terraform template that applies tags to the resource group, but the resources in the same template do not have tags applied. Instead, I have an Azure Policy that enforces inheritance of tags from the resource group.
When I make any change to the template and run terraform plan, I get a load of changes which state they will change the tags from their current values to null. This isn't causing any issue as such; it just bloats my Terraform plan with unnecessary changes.
Issue
I have tried using the lifecycle block with ignore_changes set to tags, but it doesn't seem to work, and the plan still shows the tags being removed.
Below is an example of a resource that says the tags will be removed if a change occurs.
Example Code
resource "azurerm_virtual_machine_extension" "ext_ade" {
  depends_on = [
    azurerm_virtual_machine_extension.ext_domain_join,
    azurerm_virtual_machine_extension.ext_dsc
  ]
  count                      = var.session_hosts.quantity
  name                       = var.ext_ade.name
  virtual_machine_id         = azurerm_windows_virtual_machine.vm.*.id[count.index]
  publisher                  = "Microsoft.Azure.Security"
  type                       = "AzureDiskEncryption"
  type_handler_version       = "2.2"
  auto_upgrade_minor_version = true

  settings = <<SETTINGS
    {
        "EncryptionOperation": "EnableEncryption",
        "KeyVaultURL": "${data.azurerm_key_vault.key_vault.vault_uri}",
        "KeyVaultResourceId": "${data.azurerm_key_vault.key_vault.id}",
        "KeyEncryptionKeyURL": "${azurerm_key_vault_key.ade_key.*.id[count.index]}",
        "KekVaultResourceId": "${data.azurerm_key_vault.key_vault.id}",
        "KeyEncryptionAlgorithm": "RSA-OAEP",
        "VolumeType": "All"
    }
SETTINGS

  lifecycle {
    ignore_changes = [settings, tags]
  }
}
Tagging using Azure Policy and Terraform templates ignoring tags
I tried this in my environment and was able to deploy it successfully using the lifecycle block.
I took a snippet of Terraform from the SO solution given by @Jim Xu and modified it to meet your requirements, as shown below:
main.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.99.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "random_string" "password" {
  length  = 16
  special = false
}

data "azurerm_resource_group" "newtest" {
  name = var.resource_group_name
}

resource "azurerm_key_vault" "keyvault" {
  name                            = var.key_vault_name
  resource_group_name             = var.resource_group_name
  enabled_for_disk_encryption     = true
  enabled_for_deployment          = true
  enabled_for_template_deployment = true
  location                        = data.azurerm_resource_group.newtest.location
  tenant_id                       = "<tenant-id>"
  sku_name                        = "standard"
  soft_delete_retention_days      = 90
}

resource "azurerm_key_vault_access_policy" "myPolicy" {
  key_vault_id = azurerm_key_vault.keyvault.id
  tenant_id    = "<tenant-id>"
  object_id    = "<object-id>"
  key_permissions = [
    "Create",
    "Delete",
    "Get",
    "Purge",
    "Recover",
    "Update",
    "List",
    "Decrypt",
    "Sign"
  ]
}

resource "azurerm_key_vault_key" "testKEK" {
  name         = "testKEK"
  key_vault_id = azurerm_key_vault.keyvault.id
  key_type     = "RSA"
  key_size     = 2048
  depends_on = [
    azurerm_key_vault_access_policy.myPolicy
  ]
  key_opts = [
    "decrypt",
    "encrypt",
    "sign",
    "unwrapKey",
    "verify",
    "wrapKey",
  ]
}

resource "azurerm_virtual_machine_extension" "vmextension" {
  name                       = random_string.password.result
  virtual_machine_id         = "/subscriptions/<subscription_ID>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/<VMName>"
  publisher                  = "Microsoft.Azure.Security"
  type                       = "AzureDiskEncryption"
  type_handler_version       = var.type_handler_version
  auto_upgrade_minor_version = true

  settings = <<SETTINGS
    {
        "EncryptionOperation": "${var.encrypt_operation}",
        "KeyVaultURL": "${azurerm_key_vault.keyvault.vault_uri}",
        "KeyVaultResourceId": "${azurerm_key_vault.keyvault.id}",
        "KeyEncryptionKeyURL": "${azurerm_key_vault_key.testKEK.id}",
        "KekVaultResourceId": "${azurerm_key_vault.keyvault.id}",
        "KeyEncryptionAlgorithm": "${var.encryption_algorithm}",
        "VolumeType": "${var.volume_type}"
    }
SETTINGS

  lifecycle {
    ignore_changes = [settings, tags]
  }
}
variable.tf:
variable "resource_group_name" {
  default = "newtest"
}

variable "location" {
  default = "EastUS"
}

variable "key_vault_name" {
  default = ""
}

variable "virtual_machine_id" {
  default = ""
}

variable "volume_type" {
  default = "All"
}

variable "encrypt_operation" {
  default = "EnableEncryption"
}

variable "encryption_algorithm" {
  default = "RSA-OAEP"
}

variable "type_handler_version" {
  description = "Defaults to 2.2 on Windows"
  default     = "2.2"
}
Note: you can modify the tfvars file to suit your needs.
Executed terraform init (or terraform init -upgrade), then terraform plan, and ran terraform apply after the above commands completed successfully. The key vault was created, with no changes observed in it from the portal afterwards, and the resource group (newtest) shows as deployed in the portal. (Screenshots omitted.)
I am trying to create groups and projects in GitLab with Terraform. Later I want to create a new project_membership resource to add the groups as members of the new projects, so I need the group IDs and the project IDs.
If I create all resources in their own resource blocks, I can refer to them with ${gitlab_group.NAME_OF_THE_RESOURCE.id}, but when I use for_each I do not specify a name. How can I get the group and project IDs?
terraform {
  backend "http" {
  }
  required_providers {
    gitlab = {
      source  = "gitlabhq/gitlab"
      version = "~> 3.1"
    }
  }
}

locals {
  services_project_names = toset([
    "values",
    "releases",
    "design",
    "execution"
  ])
  permission_project_names = toset([
    "vendor",
    "maintainer"
  ])
}
################################
#### permissions repository ####
################################
resource "gitlab_group" "permissions_group" {
  name             = var.service_name
  path             = var.service_name
  parent_id        = var.permissions_parent_grp
  visibility_level = "private"
}

resource "gitlab_group" "permissions_sub_group" {
  for_each         = local.permission_project_names
  name             = "${var.service_name}_${each.key}"
  path             = "${var.service_name}_${each.key}"
  parent_id        = gitlab_group.permissions_group.id
  visibility_level = "private"
}
############################
#### service repository ####
############################
resource "gitlab_group" "services_group" {
  name             = var.service_name
  path             = var.service_name
  parent_id        = var.services_parent_grp
  visibility_level = "private"
}

resource "gitlab_project" "services_projects" {
  for_each               = local.services_project_names
  name                   = "${var.service_name}_${each.key}"
  default_branch         = "main"
  description            = ""
  issues_enabled         = false
  merge_requests_enabled = false
  namespace_id           = gitlab_group.services_group.id
  snippets_enabled       = false
  visibility_level       = "private"
}
You can reference individual instances created via for_each by using the key that was used in the for_each. In your case, here are some examples:
gitlab_project.services_projects["values"]
gitlab_project.services_projects["releases"]
To get the ID of a specific project:
gitlab_project.services_projects["values"].id
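Because the for_each result is a map from each key to the created object, you can also iterate over all of them at once rather than naming each key. A minimal sketch (the output names here are just for illustration):

```hcl
# Map each project/group name from the for_each to the ID GitLab assigned it.
output "service_project_ids" {
  value = { for name, project in gitlab_project.services_projects : name => project.id }
}

output "permission_group_ids" {
  value = { for name, group in gitlab_group.permissions_sub_group : name => group.id }
}
```

A later membership resource can likewise use for_each = gitlab_project.services_projects and refer to each.value.id inside it.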
I used the Terraform code below to create an AWS EC2 instance:
resource "aws_instance" "example" {
  ami             = var.ami-id
  instance_type   = var.ec2_type
  key_name        = var.keyname
  subnet_id       = "subnet-05a63e5c1a6bcb7ac"
  security_groups = ["sg-082d39ed218fc0f2e"]

  # root disk
  root_block_device {
    volume_size           = "10"
    volume_type           = "gp3"
    encrypted             = true
    delete_on_termination = true
  }

  tags = {
    Name        = var.instance_name
    Environment = "dev"
  }

  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "required"
  }
}
After 5 minutes, with no change in the code, when I run terraform plan it reports that something changed outside of Terraform and tries to destroy and re-create the EC2 instance. Why is this happening?
How can I prevent it?
aws_instance.example: Refreshing state... [id=i-0aa279957d1287100]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # aws_instance.example has been changed
  ~ resource "aws_instance" "example" {
        id              = "i-0aa279957d1287100"
      ~ security_groups = [
          - "sg-082d39ed218fc0f2e",
        ]
        tags            = {
            "Environment" = "dev"
            "Name"        = "ec2linux"
        }
        # (26 unchanged attributes hidden)

      ~ root_block_device {
          + tags = {}
            # (9 unchanged attributes hidden)
        }

        # (4 unchanged blocks hidden)
    }
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these
changes.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
You must use vpc_security_group_ids instead of security_groups. The security_groups argument is only for EC2-Classic and default-VPC instances; for an instance launched into a VPC subnet, AWS reports the groups under vpc_security_group_ids, so Terraform sees security_groups as changed outside of Terraform and forces a replacement.
resource "aws_instance" "example" {
  ami                    = var.ami-id
  instance_type          = var.ec2_type
  key_name               = var.keyname
  subnet_id              = "subnet-05a63e5c1a6bcb7ac"
  vpc_security_group_ids = ["sg-082d39ed218fc0f2e"]

  # root disk
  root_block_device {
    volume_size           = "10"
    volume_type           = "gp3"
    encrypted             = true
    delete_on_termination = true
  }

  tags = {
    Name        = var.instance_name
    Environment = "dev"
  }

  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "required"
  }
}
When running terraform apply against the following, it keeps asking me for variable input on the CLI instead of reading values from the file. If I remove the variables from the .tf file and just leave the first one in for the AMI, it works with some massaging. Any ideas?
contents of dev.tf:
variable "aws_region" {}
variable "instance_type" {}
variable "key_name" {}
variable "vpc_security_group_ids" {}
variable "subnet_id" {}
variable "iam_instance_profile" {}
variable "tag_env" {}

provider "aws" {
  region = "${var.aws_region}"
}

data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name = "name"
    values = [
      "amzn-ami-hvm-*-x86_64-gp2",
    ]
  }

  filter {
    name = "owner-alias"
    values = [
      "amazon",
    ]
  }
}

resource "aws_instance" "kafka" {
  ami                    = "${data.aws_ami.amazon_linux.id}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${var.subnet_id}"
  key_name               = "${var.key_name}"
  vpc_security_group_ids = ["${var.vpc_security_group_ids}"]
  iam_instance_profile   = "${var.iam_instance_profile}"

  user_data = <<-EOF
    #!/bin/bash
    sudo yum -y install telnet
  EOF

  tags {
    ProductCode   = "id"
    InventoryCode = "id"
    Environment   = "${var.tag_env}"
  }
}
contents of dev.tfvars:
aws_region             = "us-east-1"
tag_env                = "dev"
instance_type          = "t2.large"
subnet_id              = "subnet-id"
vpc_security_group_ids = "sg-id , sg-id"
key_name               = "id"
iam_instance_profile   = "id"
Ah, good catch: I changed the filename to terraform.tfvars and it now works.
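That is by design: Terraform only auto-loads terraform.tfvars, terraform.tfvars.json, and *.auto.tfvars files. A file with any other name, such as dev.tfvars, can still be used if you pass it explicitly:

```shell
# dev.tfvars is not loaded automatically; name it on the command line.
terraform plan -var-file=dev.tfvars
terraform apply -var-file=dev.tfvars
```

This also lets you keep separate variable files per environment (dev.tfvars, prod.tfvars, and so on) instead of renaming one to terraform.tfvars.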
How can I do this right?
variable "vault_tag_name" {}
variable "vault_tag_value" {}

resource "aws_instance" "instance" {
  tags {
    Name                    = "${var.name}"
    Group                   = "${var.group_tag}"
    "${var.vault_tag_name}" = "${var.vault_tag_value}"
  }
}
I get no errors from Terraform, but the result is wrong:
tags.${var.vault_tag_name}: ""
tags.%: "3"
tags.Group: "test-dev"
tags.Name: "test-dev"
According to this comment, dynamic variable names are not possible at this time in HCL.
You can use zipmap to emulate this, though it's a bit of a clunky workaround;
locals {
  ec2_tag_keys = ["Name", "Group", "${var.vault_tag_name}"]
  ec2_tag_vals = ["${var.name}", "${var.group_tag}", "${var.vault_tag_value}"]
}

resource "aws_instance" "instance" {
  ...
  tags = "${zipmap(local.ec2_tag_keys, local.ec2_tag_vals)}"
}
Result:
+ aws_instance.instance
tags.%: "3"
tags.Group: "MyGroup"
tags.Name: "MyName"
tags.MyTagName: "MyTagValue"
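For what it's worth, in Terraform 0.12 and later this workaround is no longer needed: a map key can be an arbitrary expression when wrapped in parentheses. A sketch using the same variable names:

```hcl
resource "aws_instance" "instance" {
  # ...
  tags = {
    Name                 = var.name
    Group                = var.group_tag
    # Parentheses make Terraform evaluate the key as an expression
    # instead of treating it as a literal string.
    (var.vault_tag_name) = var.vault_tag_value
  }
}
```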