Use timestamp for null_resource local-exec - Terraform

I want to perform the exec operation only once per hour. Meaning, if it's now 12, don't exec again until it's 13 o'clock.
timestamp() in combination with formatdate() will produce values that only differ once per hour.
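For illustration, in terraform console (the timestamp shown is just an example):
> formatdate("YYYYMMDDhh", "2021-11-20T10:30:00Z")
"2021112010"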
resource "null_resource" "helm_login" {
triggers = {
hour = formatdate("YYYYMMDDhh", timestamp())
}
provisioner "local-exec" {
command = <<-EOF
az acr login -n ${var.helm_chart_acr_fqdn} -t -o tsv --query accessToken \
| helm registry login ${var.helm_chart_acr_fqdn} \
-u "00000000-0000-0000-0000-000000000000" \
--password-stdin
EOF
}
The problem is that Terraform reports that this value will only be known after apply and therefore always wants to recreate the resource:
# module.k8s.null_resource.helm_login must be replaced
-/+ resource "null_resource" "helm_login" {
~ id = "4503742218368236410" -> (known after apply)
~ triggers = {
- "hour" = "2021112010"
} -> (known after apply) # forces replacement
}
I have observed similar issues where values fetched from data sources and passed to resources on creation force replacement the same way, pushing me to hard-code those values instead of using the data source.

As you just found out, Terraform evaluates the timestamp() function at apply time,
which is why we see the (known after apply) # forces replacement in the plan.
But we can do something about that to meet your goal: we can pass the hour in as a variable:
variable "hour" {
type = number
}
resource "null_resource" "test" {
triggers = {
hour = var.hour
}
provisioner "local-exec" {
command = "echo 'test'"
}
}
Then we call Terraform like this:
hour=$(date +%Y%m%d%H); terraform apply -var="hour=$hour"
First run:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.test will be created
+ resource "null_resource" "test" {
+ id = (known after apply)
+ triggers = {
+ "hour" = "2021112011"
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
null_resource.test: Creating...
null_resource.test: Provisioning with 'local-exec'...
null_resource.test (local-exec): Executing: ["/bin/sh" "-c" "echo 'test'"]
null_resource.test (local-exec): test
null_resource.test: Creation complete after 0s [id=6793564729560967989]
Second run:
null_resource.test: Refreshing state... [id=6793564729560967989]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
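A variation that keeps everything inside Terraform, assuming you can add the hashicorp/time provider: its time_rotating resource stores a timestamp in state and only changes after the configured interval, so the trigger stays known at plan time between rotations. A minimal sketch (resource labels are my own):
resource "time_rotating" "hourly" {
  rotation_hours = 1
}

resource "null_resource" "helm_login" {
  triggers = {
    # changes at most once per hour, when time_rotating rotates
    rotation = time_rotating.hourly.id
  }
  # local-exec provisioner as in the question
}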

Related

Terraform - how to make the 'local_file' resource be recreated with every 'terraform apply'

I have a local_file resource in my Terraform configuration.
The problem is how to tell Terraform that I want this resource to be recreated every time my client runs 'terraform apply', even if nothing changed in the resource itself. How can I make this possible?
local_file resource can't do what I want
I am using triggers to do this in a null_resource, but there is no such option in the local_file resource.
null_resource does what I want
Instead of the local provider, try the template provider and create the file using a null_resource, so that the trigger takes care of recreating the file. Tested like below:
data "template_file" "inventory_cfg" {
  template = file("${path.module}/templates/inventory.tpl")
  vars = {
    bastion_host = local.Mongo_Bastion_host
    key_file     = var.private_key_file_path
  }
}

resource "null_resource" "copy-inventory" {
  triggers = {
    ip = local.random_id
  }
  provisioner "local-exec" {
    command = "echo ${data.template_file.inventory_cfg.rendered} >> inventory"
  }
}
After terraform apply, I reran terraform plan and was able to see it is creating the file again:
Terraform will perform the following actions:
# null_resource.copy-inventory must be replaced
-/+ resource "null_resource" "copy-inventory" {
~ id = "7011189362963809116" -> (known after apply)
~ triggers = {
- "ip" = "f9f0f771-42ae-176e-3722-5b342665dea2"
} -> (known after apply) # forces replacement
}
Plan: 1 to add, 0 to change, 1 to destroy.
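For completeness, if the requirement is strictly "recreate on every apply", a timestamp() trigger (as discussed in the first question above) forces replacement on each run; a minimal sketch with a hypothetical command:
resource "null_resource" "always_recreate" {
  triggers = {
    always_run = timestamp() # evaluated fresh on every apply, so replacement is always forced
  }
  provisioner "local-exec" {
    command = "date >> inventory" # hypothetical command; substitute your file-writing step
  }
}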

Terraform - How to configure lifecycle policy for existing storage account

I have a storage account created in the Azure portal (outside of Terraform). I want to configure a lifecycle management policy to delete older blobs. I have tried terraform import to import the resource (the storage account), but when I run terraform plan it says it will replace or create the storage account.
But I don't want to recreate the storage account, which has some data in it.
provider "azurerm" {
features {}
skip_provider_registration = "true"
}
variable "LOCATION" {
default = "northeurope"
description = "Region to deploy into"
}
variable "RESOURCE_GROUP" {
default = "[RETRACTED]" # The value is same in azure portal
description = "Name of the resource group"
}
variable "STORAGE_ACCOUNT" {
default = "[RETRACTED]" # The value is same in azure portal
description = "Name of the storage account where to store the backup"
}
variable "STORAGE_ACCOUNT_RETENTION_DAYS" {
default = "180"
description = "Number of days to keep the backups"
}
resource "azurerm_resource_group" "storage-account" {
name = var.RESOURCE_GROUP
location = var.LOCATION
}
resource "azurerm_storage_account" "storage-account-lifecycle" {
name = var.STORAGE_ACCOUNT
location = azurerm_resource_group.storage-account.location
resource_group_name = azurerm_resource_group.storage-account.name
account_tier = "Standard"
account_replication_type = "RAGRS" #Read-access geo-redundant storage
}
resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
storage_account_id = azurerm_storage_account.storage-account-lifecycle.id
rule {
name = "DeleteOldBackups"
enabled = true
filters {
blob_types = ["blockBlob"]
}
actions {
base_blob {
delete_after_days_since_modification_greater_than = var.STORAGE_ACCOUNT_RETENTION_DAYS
}
}
}
}
Import resource
$ terraform import azurerm_storage_account.storage-account-lifecycle /subscriptions/[RETRACTED]
azurerm_storage_account.storage-account-lifecycle: Importing from ID "/subscriptions/[RETRACTED]...
azurerm_storage_account.storage-account-lifecycle: Import prepared!
Prepared azurerm_storage_account for import
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
The plan is below
$ terraform plan
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# azurerm_resource_group.storage-account will be created
+ resource "azurerm_resource_group" "storage-account" {
+ id = (known after apply)
+ location = "northeurope"
+ name = "[RETRACTED]"
}
# azurerm_storage_management_policy.storage-account-lifecycle-management-policy will be created
+ resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
+ id = (known after apply)
+ storage_account_id = "/subscriptions/[RETRACTED]"
+ rule {
+ enabled = true
+ name = "DeleteOldBackups"
+ actions {
+ base_blob {
+ delete_after_days_since_modification_greater_than = 180
}
}
+ filters {
+ blob_types = [
+ "blockBlob",
]
}
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform
apply" now.
From the plan, I see it will create the "storage-account" resource group. I also tried removing the azurerm_storage_account section and specifying the resource ID directly for storage_account_id in the azurerm_storage_management_policy section, but it still says # azurerm_resource_group.storage-account will be created.
How can I configure a lifecycle management policy without modifying/creating the existing storage account?
PS: This is my first Terraform script.
OK, I see the problem, as @Jim Xu pointed out in the comment: I didn't import the resource group, which is what the plan is saying. I imported the resource group like this and ran terraform plan:
$ terraform import azurerm_resource_group.storage-account /subscriptions/[RETRACTED]
$ terraform plan
azurerm_resource_group.storage-account: Refreshing state... [id=/subscriptions/[RETRACTED]]
azurerm_storage_account.storage-account-lifecycle: Refreshing state... [id=/subscriptions/[RETRACTED]]
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
+ create
Terraform will perform the following actions:
# azurerm_storage_management_policy.storage-account-lifecycle-management-policy will be created
+ resource "azurerm_storage_management_policy" "storage-account-lifecycle-management-policy" {
+ id = (known after apply)
+ storage_account_id = "/subscriptions/[RETRACTED]"
+ rule {
+ enabled = true
+ name = "DeleteOldBackups"
+ actions {
+ base_blob {
+ delete_after_days_since_modification_greater_than = 180
}
}
+ filters {
+ blob_types = [
+ "blockBlob",
]
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
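As a safety net while importing, it may also be worth adding a lifecycle block to the storage account resource, so that any plan that would destroy the imported account fails instead of silently recreating it; a sketch on top of the configuration above:
resource "azurerm_storage_account" "storage-account-lifecycle" {
  name                     = var.STORAGE_ACCOUNT
  location                 = azurerm_resource_group.storage-account.location
  resource_group_name      = azurerm_resource_group.storage-account.name
  account_tier             = "Standard"
  account_replication_type = "RAGRS"

  lifecycle {
    prevent_destroy = true # apply errors out rather than destroying the imported account
  }
}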

How to create azurerm_resource_group through terraform only when it does not exist in Azure?

I want my terraform script to create the resource group only when it does not exist in Azure, otherwise it should skip the creation of resource group.
Well, you can use the Terraform external data source to execute a CLI command that checks whether the resource group exists, and then use the result to determine whether the resource group will be created. Here is an example:
./main.tf
provider "azurerm" {
features {}
}
variable "group_name" {}
variable "location" {
default = "East Asia"
}
data "external" "example" {
program = ["/bin/bash","./script.sh"]
query = {
group_name = var.group_name
}
}
resource "azurerm_resource_group" "example" {
count = data.external.example.result.exists == "true" ? 0 : 1
name = var.group_name
location = var.location
}
./script.sh
#!/bin/bash
eval "$(jq -r '@sh "GROUP_NAME=\(.group_name)"')"
result=$(az group exists -n "$GROUP_NAME")
jq -n --arg exists "$result" '{"exists":$exists}'
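To try it out (the group name is just an example; the script assumes jq and a logged-in Azure CLI are available):
chmod +x script.sh
terraform init
terraform apply -var="group_name=my-existing-group"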
Terraform is declarative, not imperative. When using Terraform you shouldn't need to check for existing resources.
To validate your tf script:
terraform plan
And to apply the tf script changes:
terraform apply
This will validate whether the resources already exist and create them if they do not.
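In the same spirit, if the resource group already exists and you want Terraform to manage it rather than skip it, importing it into state is the declarative answer (the subscription ID and group name below are placeholders):
terraform import azurerm_resource_group.example /subscriptions/<subscription-id>/resourceGroups/<group-name>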

Terraform using count with list inside list variable?

I am trying to create a few storage accounts and some containers in each account. I need to create this as a module so that I can reuse it. The way I am thinking of doing this is by creating a variable such as:
storageaccounts = [
  {
    name       = "testbackupstorage11"
    containers = ["logs", "web", "backups"]
  },
  {
    name       = "testbackupstorage12"
    containers = ["logs-1", "web-1"]
  }
]
I've created the following code. However, I think this line
count = length(var.storageaccounts.*.containers)
is giving me an error. I want to loop through the storageaccounts array, get the containers, and assign the length of the containers list to the count inside azurerm_storage_container, so that this block creates multiple containers per storage account.
However, this doesn't work as expected, most likely because of the *.
I also tested with
count = length(var.storageaccounts[count.index].containers)
When I do this, I get the error:
on ..\modules\storage\main.tf line 21, in resource "azurerm_storage_container" "this":
21: count = length(var.storageaccounts[count.index].containers)
The "count" object can be used only in "resource" and "data" blocks, and only
when the "count" argument is set.
How can I accomplish this? Or is there any better way?
Here is the full code.
resource "random_id" "this" {
count = length(var.storageaccounts)
keepers = {
storagename = 1
}
byte_length = 6
prefix = var.storageaccounts[count.index].name
}
resource "azurerm_storage_account" "this" {
count = length(var.storageaccounts)
name = substr(lower(random_id.this[count.index].hex), 0, 24)
resource_group_name = var.resourcegroup
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "this" {
count = length(var.storageaccounts.*.containers)
name = var.storageaccounts[count.index].containers[count.index]
storage_account_name = azurerm_storage_account.this[count.index].name
container_access_type = "private"
}
provider "random" {
version = "2.2"
}
locals {
  storageaccounts = [
    {
      name       = "testbackupstorage11"
      containers = ["logs", "web", "backups"]
    },
    {
      name       = "testbackupstorage12"
      containers = ["logs-1", "web-1"]
    }
  ]
}

module "storage" {
  source          = "../modules/storage"
  resourcegroup   = "my-test"
  location        = "eastus"
  storageaccounts = local.storageaccounts
}

provider "azurerm" {
  version  = "=2.0.0"
  features {}
}

//variable "prefix" {}
variable "location" {}
variable "resourcegroup" {}

variable "storageaccounts" {
  default = []
  type = list(object({
    name       = string
    containers = list(string)
  }))
}
count = length(var.storageaccounts.*.containers) will return the length of var.storageaccounts, which is 2.
count = length(var.storageaccounts[count.index].containers) fails because count.index cannot be referenced while the count argument itself is being evaluated.
What you can do is flatten the lists.
For example:
variables.tf
variable "storageaccounts" {
default = []
type = list(object({
name = string
containers = list(string)
}))
}
main.tf
resource "null_resource" "cluster" {
count = length(flatten(var.storageaccounts.*.containers))
provisioner "local-exec" {
command = "echo ${flatten(var.storageaccounts.*.containers)[count.index]}"
}
}
variables.tfvars
storageaccounts = [
  {
    name       = "testbackupstorage11"
    containers = ["logs", "web", "backups"]
  },
  {
    name       = "testbackupstorage12"
    containers = ["logs-1", "web-1"]
  }
]
The plan
terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.cluster[0] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
# null_resource.cluster[1] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
# null_resource.cluster[2] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
# null_resource.cluster[3] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
# null_resource.cluster[4] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
Plan: 5 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: /path/plan
To perform exactly these actions, run the following command to apply:
terraform apply "/path/plan"
The application
terraform apply
/outputs/basics/plan
null_resource.cluster[1]: Creating...
null_resource.cluster[4]: Creating...
null_resource.cluster[3]: Creating...
null_resource.cluster[0]: Creating...
null_resource.cluster[2]: Creating...
null_resource.cluster[3]: Provisioning with 'local-exec'...
null_resource.cluster[1]: Provisioning with 'local-exec'...
null_resource.cluster[4]: Provisioning with 'local-exec'...
null_resource.cluster[0]: Provisioning with 'local-exec'...
null_resource.cluster[2]: Provisioning with 'local-exec'...
null_resource.cluster[3] (local-exec): Executing: ["/bin/sh" "-c" "echo logs-1"]
null_resource.cluster[2] (local-exec): Executing: ["/bin/sh" "-c" "echo backups"]
null_resource.cluster[4] (local-exec): Executing: ["/bin/sh" "-c" "echo web-1"]
null_resource.cluster[1] (local-exec): Executing: ["/bin/sh" "-c" "echo web"]
null_resource.cluster[0] (local-exec): Executing: ["/bin/sh" "-c" "echo logs"]
null_resource.cluster[2] (local-exec): backups
null_resource.cluster[2]: Creation complete after 0s [id=3936346761857660500]
null_resource.cluster[4] (local-exec): web-1
null_resource.cluster[3] (local-exec): logs-1
null_resource.cluster[0] (local-exec): logs
null_resource.cluster[1] (local-exec): web
null_resource.cluster[4]: Creation complete after 0s [id=3473332636300628727]
null_resource.cluster[3]: Creation complete after 0s [id=8036538301397331156]
null_resource.cluster[1]: Creation complete after 0s [id=8566902439147392295]
null_resource.cluster[0]: Creation complete after 0s [id=6115664408585418236]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
length(var.storageaccounts.*.containers)
Your count doesn't make sense; you're asking for the containers attribute of the whole storageaccounts list. So it would be looking for
[
  {
    name       = "testbackupstorage11"
    containers = ["logs", "web", "backups"]
  },
  {
    name       = "testbackupstorage12"
    containers = ["logs-1", "web-1"]
  }
].containers
Try using a locals block to merge everything into one list:
locals {
  storageaccounts = [for x in var.storageaccounts : x.containers] // returns a list of lists
}
Then:
count = length(flatten(local.storageaccounts)) // all one big list
https://www.terraform.io/docs/configuration/functions/flatten.html
Sorry, I haven't had a chance to test the code, but I hope this helps.
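Applying the same flattening idea back to the original module, one way is to flatten the accounts and containers into a single list of pairs first, so each container keeps a reference to its parent account. A sketch under the question's variable shape (untested; names mirror the question):
locals {
  container_pairs = flatten([
    for i, sa in var.storageaccounts : [
      for c in sa.containers : {
        account_index = i # index of the parent storage account
        container     = c # container name to create
      }
    ]
  ])
}

resource "azurerm_storage_container" "this" {
  count                 = length(local.container_pairs)
  name                  = local.container_pairs[count.index].container
  storage_account_name  = azurerm_storage_account.this[local.container_pairs[count.index].account_index].name
  container_access_type = "private"
}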

Terraform null resource execution order

The problem:
I'm trying to build a Docker Swarm cluster on Digital Ocean, consisting of 3 "manager" nodes and however many worker nodes. The number of worker nodes isn't particularly relevant for this question. I'm trying to module-ize the Docker Swarm provisioning stuff so it's not specifically coupled to the digitalocean provider, but can instead receive a list of IP addresses to act against when provisioning the cluster.
In order to provision the master nodes, the first node needs to be put into swarm mode, which generates a join key that the other master nodes use to join the first one. "null_resource"s are being used to execute remote provisioners against the master nodes; however, I cannot figure out how to make sure the first master node completes its work ("docker swarm init ...") before having another "null_resource" provisioner execute against the other master nodes that need to join it. They all run in parallel and, predictably, it doesn't work.
Further, I'm trying to figure out how to collect the first node's generated join token and make it available to the other nodes. I've considered doing this with Consul, storing the join token as a key and reading that key on the other nodes, but this isn't ideal, as there are still issues with ensuring the Consul cluster is provisioned and ready (so it's kind of the same problem).
main.tf
variable "master_count" { default = 3 }
# master nodes
resource "digitalocean_droplet" "master_nodes" {
count = "${var.master_count}"
... etc, etc
}
module "docker_master" {
source = "./docker/master"
private_ip = "${digitalocean_droplet.master_nodes.*.ipv4_address_private}"
public_ip = "${digitalocean_droplet.master_nodes.*.ipv4_address}"
instances = "${var.master_count}"
}
docker/master/main.tf
variable "instances" {}
variable "private_ip" { type = "list" }
variable "public_ip" { type = "list" }
# Act only on the first item in the list of masters...
resource "null_resource" "swarm_master" {
count = 1
# Just to ensure this gets run every time
triggers {
version = "${timestamp()}"
}
connection {
...
host = "${element(var.public_ip, 0)}"
}
provisioner "remote-exec" {
inline = [<<EOF
... install docker, then ...
docker swarm init --advertise-addr ${element(var.private_ip, 0)}
MANAGER_JOIN_TOKEN=$(docker swarm join-token manager -q)
# need to do something with the join token, like make it available
# as an attribute for interpolation in the next "null_resource" block
EOF
]
}
}
# Act on the other 2 swarm master nodes (*not* the first one)
resource "null_resource" "other_swarm_masters" {
count = "${var.instances - 1}"
triggers {
version = "${timestamp()}"
}
# Host key slices the 3-element IP list and excludes the first one
connection {
...
host = "${element(slice(var.public_ip, 1, length(var.public_ip)), count.index)}"
}
provisioner "remote-exec" {
inline = [<<EOF
SWARM_MASTER_JOIN_TOKEN=$(consul kv get docker/swarm/manager/join_token)
docker swarm join --token ??? ${element(var.private_ip, 0)}:2377
EOF
]
}
##### THIS IS THE MEAT OF THE QUESTION ###
# How do I make this "null_resource" block not run until the other one has
# completed and generated the swarm token output? depends_on doesn't
# seem to do it :(
}
From reading through GitHub issues, I get the feeling this isn't an uncommon problem... but it's kicking my ass. Any suggestions appreciated!
@victor-m's comment is correct. If you use a null_resource and add a trigger on a former resource's property, like the following, then they will execute in order.
resource "null_resource" "first" {
provisioner "local-exec" {
command = "echo 'first' > newfile"
}
}
resource "null_resource" "second" {
triggers = {
order = null_resource.first.id
}
provisioner "local-exec" {
command = "echo 'second' >> newfile"
}
}
resource "null_resource" "third" {
triggers = {
order = null_resource.second.id
}
provisioner "local-exec" {
command = "echo 'third' >> newfile"
}
}
$ terraform apply
null_resource.first: Creating...
null_resource.first: Provisioning with 'local-exec'...
null_resource.first (local-exec): Executing: ["/bin/sh" "-c" "echo 'first' > newfile"]
null_resource.first: Creation complete after 0s [id=3107778766090269290]
null_resource.second: Creating...
null_resource.second: Provisioning with 'local-exec'...
null_resource.second (local-exec): Executing: ["/bin/sh" "-c" "echo 'second' >> newfile"]
null_resource.second: Creation complete after 0s [id=3159896803213063900]
null_resource.third: Creating...
null_resource.third: Provisioning with 'local-exec'...
null_resource.third (local-exec): Executing: ["/bin/sh" "-c" "echo 'third' >> newfile"]
null_resource.third: Creation complete after 0s [id=6959717123480445161]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
To make sure, cat the new file; here's the output, as expected:
$ cat newfile
first
second
third
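In current Terraform versions, depends_on on a null_resource orders creation the same way, though unlike a trigger it won't force the dependent resource to be replaced when the first one is recreated; a minimal sketch reusing the example above:
resource "null_resource" "second" {
  # created only after null_resource.first has finished provisioning
  depends_on = [null_resource.first]

  provisioner "local-exec" {
    command = "echo 'second' >> newfile"
  }
}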
