Updating a service principal's password with Terraform based on when it's going to expire
Setting the service principal up with a password works perfectly the first time. However, I want the password to expire, and when it is about to expire, a new one should be generated and the service principal updated with it. I'm not entirely sure how to do conditionals in Terraform, as I am still fairly new to it. The docs only really talk about creating the service principal, not updating it, and there is no data source to fetch when the password is going to expire.
So far I have this (full disclosure this is part of a bigger terraform base that I am helping with):
resource "azuread_application" "current" {
  name = "test"
}

resource "azuread_service_principal" "current" {
  application_id = "${azuread_application.current.application_id}"
}

resource "random_string" "password" {
  length  = 64
  special = true
}

resource "azuread_service_principal_password" "current" {
  service_principal_id = "${azuread_service_principal.current.id}"
  value                = "${random_string.password.result}"
  end_date_relative    = "2160h" # valid for 90 days
}
As the password is only valid for 90 days, I want to run terraform apply just before it expires and update the password.
Update 1:
It seems that if you change the azuread_service_principal_password resource, it counts as a change in the dependency tree and recreates the resource you have attached the service principal to, which means there is no way to keep the service principal's credentials in Terraform state if they need to be updated.
Update 2:
I have attempted the following; the downside is that it runs every time you run terraform apply:
terraform script:
resource "azuread_application" "current" {
  name = "${var.metadata_name}"
}

resource "azuread_service_principal" "current" {
  application_id = "${azuread_application.current.application_id}"
}

resource "random_string" "password" {
  length  = 64
  special = true
}

resource "azuread_service_principal_password" "current" {
  service_principal_id = "${azuread_service_principal.current.id}"
  value                = "${random_string.password.result}"
  end_date_relative    = "2160h" # valid for 90 days
}

resource "null_resource" "password_updater" {
  # timestamp() changes on every terraform apply, so the script runs every time
  triggers = {
    timestamp = "${timestamp()}"
  }

  provisioner "local-exec" {
    command = "sh ${path.module}/update_service_password.sh ${azuread_service_principal.current.id} ${var.resource_group} ${azurerm_kubernetes_cluster.current.name}"
  }
}
script:
#!/bin/sh
service_principal_id=$1
resource_group=$2
cluster_name=$3

# Get the service principal's password expiration date
expiration=$(az ad sp list --filter="objectId eq '$service_principal_id'" | jq -r '.[].passwordCredentials[].endDate' | cut -d'T' -f 1)

# Format dates for the comparison
now=$(date +%Y%m%d%H%M%S)
expiration_date=$(date -d "$expiration - 30 days" +%Y%m%d%H%M%S)

# Compare today with the expiration date
if [ ${now} -ge ${expiration_date} ]; then
  # If the password expires within the next 30 days, reset it
  sp_id=$(az aks show -g ${resource_group} -n ${cluster_name} --query servicePrincipalProfile.clientId -o tsv)
  service_principal_secret=$(az ad sp credential reset --name ${sp_id} --end-date $(date -d "+ 90 days" +%Y-%m-%d) --query password -o tsv)

  # Update the cluster with the new password
  az aks update-credentials \
    --resource-group ${resource_group} \
    --name ${cluster_name} \
    --reset-service-principal \
    --service-principal ${sp_id} \
    --client-secret ${service_principal_secret}
fi
For the service principal, its password can be reset through the Azure CLI with az ad sp credential reset, but you need to have the permission to do that.
I am just going to set this as the answer: after talking to the developers of the service principal Terraform module, they told me it is not possible any other way. If a better way is found, please comment:
Answer:
Use the null_resource provider to run a script that runs the update -
resource "azuread_application" "current" {
  name = "${var.metadata_name}"
}

resource "azuread_service_principal" "current" {
  application_id = "${azuread_application.current.application_id}"
}

resource "random_string" "password" {
  length  = 64
  special = true
}

resource "azuread_service_principal_password" "current" {
  service_principal_id = "${azuread_service_principal.current.id}"
  value                = "${random_string.password.result}"
  end_date_relative    = "2160h" # valid for 90 days
}

resource "null_resource" "password_updater" {
  # timestamp() changes on every terraform apply, so the script runs every time
  triggers = {
    timestamp = "${timestamp()}"
  }

  provisioner "local-exec" {
    command = "sh ${path.module}/update_service_password.sh ${azuread_service_principal.current.id} ${var.resource_group} ${azurerm_kubernetes_cluster.current.name}"
  }
}
script:
#!/bin/sh
service_principal_id=$1
resource_group=$2
cluster_name=$3

# Get the service principal's password expiration date
expiration=$(az ad sp list --filter="objectId eq '$service_principal_id'" | jq -r '.[].passwordCredentials[].endDate' | cut -d'T' -f 1)

# Format dates for the comparison
now=$(date +%Y%m%d%H%M%S)
expiration_date=$(date -d "$expiration - 30 days" +%Y%m%d%H%M%S)

# Compare today with the expiration date
if [ ${now} -ge ${expiration_date} ]; then
  # If the password expires within the next 30 days, reset it
  sp_id=$(az aks show -g ${resource_group} -n ${cluster_name} --query servicePrincipalProfile.clientId -o tsv)
  service_principal_secret=$(az ad sp credential reset --name ${sp_id} --end-date $(date -d "+ 90 days" +%Y-%m-%d) --query password -o tsv)

  # Update the cluster with the new password
  az aks update-credentials \
    --resource-group ${resource_group} \
    --name ${cluster_name} \
    --reset-service-principal \
    --service-principal ${sp_id} \
    --client-secret ${service_principal_secret}
fi
I think a better approach is this:
Your Terraform code is most likely wrapped in a bigger process. Most likely you use bash to kick off the process and then Terraform. If not, I suggest you do, as this is best practice with Terraform.
In your bash code, before running Terraform, check the expiry of the relevant service principals, for example using the az CLI (the exact tool does not matter).
If the password is about to expire, use the terraform taint command to mark the service principal password resource as tainted. I do not have the details; maybe you need to taint the service principal too, maybe not.
Once tainted, Terraform would recreate the resource and regenerate the password.
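The wrapper described above could be sketched like this. The resource address, the az query, and the jq path are assumptions to adapt to your configuration, and GNU date is assumed for the -d flag:

```shell
#!/bin/sh
# Sketch of a wrapper: taint the password resource when the secret is
# within 30 days of expiry, then apply as usual.

# True (exit 0) if the YYYY-MM-DD date in $1 is at most $2 days away.
expires_within() {
  expiry="$1"
  days="$2"
  threshold=$(date -d "+${days} days" +%Y%m%d)  # GNU date assumed
  exp=$(date -d "${expiry}" +%Y%m%d)
  [ "${exp}" -le "${threshold}" ]
}

rotate_if_needed() {
  sp_object_id="$1"
  # Hypothetical lookup of the password credential's end date
  expiry=$(az ad sp list --filter="objectId eq '${sp_object_id}'" \
    | jq -r '.[].passwordCredentials[].endDate' | cut -d'T' -f1)
  if expires_within "${expiry}" 30; then
    terraform taint azuread_service_principal_password.current
  fi
  terraform apply
}
```

Because the taint happens before terraform apply, the plan already shows the password resource as forcing replacement, and downstream resources react in the same run.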
Related
I'm currently working on a Terraform script which creates a Cloudflare zone and makes some configuration changes, and if the user sets a boolean variable to true I need to delete this Cloudflare zone. The zone is on the Enterprise plan. Can any of you help me delete this zone using my Terraform script? I can downgrade the plan to the Free plan using an API request to Cloudflare.
Is there any Terraform function which can be used to delete a zone?
Code
resource "cloudflare_zone" "cloudflarecreatezone" {
  count      = var.delete ? 0 : 1
  jump_start = "true"
  zone       = var.zone_name
  type       = "partial"
  plan       = "enterprise"
}

resource "cloudflare_waf_group" "Cloudflare_Joomla" {
  count    = var.delete ? 0 : 1
  group_id = "dc85d7a0s342918s886s32056069dfa94"
  zone_id  = cloudflare_zone.cloudflarecreatezone[count.index].id
  mode     = "off"
}

resource "null_resource" "dg" {
  count = var.delete ? 1 : 0

  provisioner "local-exec" {
    command     = <<-EOF
      id=$(curl -s -k -X GET 'https://api.cloudflare.com/client/v4/zones/?name=${var.zone_name}' -H "X-Auth-Email: ${var.email}" -H "X-Auth-Key: ${var.api_key}" -H "Content-Type: application/json" | awk -F ':' '{print $3}' | awk -F '"' '{print $2}')
      curl -s -k -X PATCH -d '{"plan":{"id":"0feeeeeeeeeeeeeeeeeeeeeeeeeeeeee"}}' "https://api.cloudflare.com/client/v4/zones/$id" -H "X-Auth-Email: ${var.email}" -H "X-Auth-Key: ${var.api_key}" -H "Content-Type: application/json"
    EOF
    interpreter = ["/bin/bash", "-c"]
  }
}

resource "null_resource" "delete_zone" {
  count = var.delete ? 1 : 0
}
TIA
I expect my script to be able to delete the Cloudflare zone once the delete variable is set to true.
I want to perform the exec operation only once per hour. Meaning, if it's now 12:xx, don't exec again until it's 13:00.
The timestamp in combination with formatdate results in values that only differ every hour.
resource "null_resource" "helm_login" {
  triggers = {
    hour = formatdate("YYYYMMDDhh", timestamp())
  }

  provisioner "local-exec" {
    command = <<-EOF
      az acr login -n ${var.helm_chart_acr_fqdn} -t -o tsv --query accessToken \
        | helm registry login ${var.helm_chart_acr_fqdn} \
            -u "00000000-0000-0000-0000-000000000000" \
            --password-stdin
    EOF
  }
}
The problem is that Terraform reports that this value will only be known after apply, and always wants to recreate the resource.
  # module.k8s.null_resource.helm_login must be replaced
  -/+ resource "null_resource" "helm_login" {
        ~ id       = "4503742218368236410" -> (known after apply)
        ~ triggers = {
            - "hour" = "2021112010"
          } -> (known after apply) # forces replacement
      }
I have observed similar issues where values are fetched from data and passed to resources on creation, forcing me to not use those data values but hard code them.
As you just found out, Terraform evaluates the timestamp function at apply time;
that is why we see: (known after apply) # forces replacement
But we can do something about that to meet your goal: we can pass the hour in as a variable:
variable "hour" {
  type = number
}

resource "null_resource" "test" {
  triggers = {
    hour = var.hour
  }

  provisioner "local-exec" {
    command = "echo 'test'"
  }
}
Then to call Terraform we do:
hour=$(date +%Y%m%d%H); terraform apply -var="hour=$hour"
First run:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.test will be created
+ resource "null_resource" "test" {
+ id = (known after apply)
+ triggers = {
+ "hour" = "2021112011"
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
null_resource.test: Creating...
null_resource.test: Provisioning with 'local-exec'...
null_resource.test (local-exec): Executing: ["/bin/sh" "-c" "echo 'test'"]
null_resource.test (local-exec): test
null_resource.test: Creation complete after 0s [id=6793564729560967989]
Second run:
null_resource.test: Refreshing state... [id=6793564729560967989]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
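An alternative that avoids passing the hour in from the shell, assuming the hashicorp/time provider is available (a sketch, not from the original answer): a time_rotating resource only plans its own replacement once the interval has elapsed, so its id stays stable, and known at plan time, between rotations.

```hcl
resource "time_rotating" "hourly" {
  rotation_hours = 1
}

resource "null_resource" "hourly_task" {
  # Replaced only when time_rotating.hourly rotates, i.e. at most hourly
  triggers = {
    rotation = time_rotating.hourly.id
  }

  provisioner "local-exec" {
    command = "echo 'run the hourly login here'"
  }
}
```

Note that the rotation interval starts from when the time_rotating resource was created, rather than aligning to clock hours as the formatdate approach does.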
I have a null_resource with a local-exec block making a curl call with a Google access token.
Since the command is executed during destroy, I am forced to reference the token through the triggers map.
Each time I run terraform apply, that null_resource has to be replaced because the Google access token keeps changing.
resource "null_resource" "env_to_group" {
  for_each = local.map_env_group

  triggers = {
    env_id       = google_apigee_environment.apigee[each.value.env].id
    group_id     = google_apigee_envgroup.apigee[each.value.group].id
    access_token = data.google_client_config.current.access_token
    project      = var.project
    group        = each.value.group
    env          = each.value.env
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer ${self.triggers.access_token}" \
  "https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
  -X DELETE -H "content-type:application/json"
EOF
  }
}
Is there a way to ignore changes to google access token, or is there a way not having to specify access token var within the triggers block?
I think you should still be able to accomplish this using the depends_on meta-argument and a separate resource that makes the ephemeral access token available to the command during the destroy lifecycle.
resource "local_file" "access_token" {
  content  = data.google_client_config.current.access_token
  filename = "/var/share/access-token"
}

resource "null_resource" "env_to_group" {
  for_each = local.map_env_group

  triggers = {
    env_id   = google_apigee_environment.apigee[each.value.env].id
    group_id = google_apigee_envgroup.apigee[each.value.group].id
    project  = var.project
    group    = each.value.group
    env      = each.value.env
  }

  depends_on = [local_file.access_token]

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer $(cat /var/share/access-token)" \
  "https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
  -X DELETE -H "content-type:application/json"
EOF
  }
}
I guess another solution would be to pass some kind of credentials to the command through which you could obtain the access token for the relevant service account via API calls, or to use Application Default Credentials if configured.
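For example, if the machine running Terraform has an authenticated gcloud CLI, the destroy-time command could fetch a fresh token itself instead of reading a stored one. A sketch, assuming gcloud is installed and authorized for the project:

```hcl
provisioner "local-exec" {
  when    = destroy
  command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
  -X DELETE -H "content-type:application/json"
EOF
}
```

This removes access_token from the triggers map entirely, so the resource is no longer replaced when the token changes, and it avoids persisting a credential to disk as the local_file approach does.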
I want my Terraform script to create the resource group only when it does not exist in Azure; otherwise it should skip creating it.
Well, you can use the Terraform external data source to execute a CLI command that checks whether the resource group exists, and then use the result to decide whether the resource group will be created. Here is an example:
./main.tf
provider "azurerm" {
  features {}
}

variable "group_name" {}

variable "location" {
  default = "East Asia"
}

data "external" "example" {
  program = ["/bin/bash", "./script.sh"]

  query = {
    group_name = var.group_name
  }
}

resource "azurerm_resource_group" "example" {
  count    = data.external.example.result.exists == "true" ? 0 : 1
  name     = var.group_name
  location = var.location
}
./script.sh
#!/bin/bash
eval "$(jq -r '@sh "GROUP_NAME=\(.group_name)"')"
result=$(az group exists -n $GROUP_NAME)
jq -n --arg exists "$result" '{"exists":$exists}'
Terraform is declarative, not imperative. When using Terraform you shouldn't need to check for existing resources yourself.
To validate your Terraform configuration:
terraform plan
and to apply the changes:
terraform apply
This will leave resources that already exist in the state untouched and create the ones that don't.
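If the resource group already exists in Azure but not in your state, the usual route is to bring it under management with terraform import rather than to conditionally skip it. A small helper that builds the Azure resource ID terraform import expects (the subscription ID and names below are placeholders):

```shell
# Build the resource ID for an Azure resource group, in the form
# expected by `terraform import azurerm_resource_group.example <id>`.
rg_import_id() {
  printf '/subscriptions/%s/resourceGroups/%s' "$1" "$2"
}

# Example (placeholder values):
# terraform import azurerm_resource_group.example \
#   "$(rg_import_id "00000000-0000-0000-0000-000000000000" "my-rg")"
```

After the import, terraform plan treats the group as existing and will not try to recreate it.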
I am trying to run a Terraform deployment via a shell script in which I first dynamically collect the access key for my Azure storage account, assign it to a variable, and then use that variable in a -var assignment on the terraform command line. This method works great when configuring the backend for remote state, but it is not working for a deployment. The other variables used in the template are pulled from a terraform.tfvars file. Below are my shell script and Terraform template:
Shell script:
#!/bin/bash
set -eo pipefail
subscription_name="Visual Studio Enterprise with MSDN"
tfstate_storage_resource_group="terraform-state-rg"
tfstate_storage_account="terraformtfstatesa"
az account set --subscription "$subscription_name"
tfstate_storage_access_key=$(
az storage account keys list \
--resource-group "$tfstate_storage_resource_group" \
--account-name "$tfstate_storage_account" \
--query '[0].value' -o tsv
)
echo $tfstate_storage_access_key
terraform apply \
-var "access_key=$tfstate_storage_access_key"
Deployment template:
provider "azurerm" {
  subscription_id = "${var.sub_id}"
}

data "terraform_remote_state" "rg" {
  backend = "azurerm"

  config {
    storage_account_name = "terraformtfstatesa"
    container_name       = "terraform-state"
    key                  = "rg.stage.project.terraform.tfstate"
    access_key           = "${var.access_key}"
  }
}

resource "azurerm_storage_account" "my_table" {
  name                     = "${var.storage_account}"
  resource_group_name      = "${data.terraform_remote_state.rg.rgname}"
  location                 = "${var.region}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
I have tried defining the variable in my terraform.tfvars file:
storage_account = "appastagesa"
les_table_name  = "appatable"
region          = "eastus"
sub_id          = "abc12345-099c-1234-1234-998899889988"
access_key      = ""
The access_key definition appears to get ignored.
I then tried not using a terraform.tfvars file, and created the variables.tf file below:
variable "storage_account" {
  description = "Name of the storage account to create"
  default     = "appastagesa"
}

variable "les_table_name" {
  description = "Name of the App table to create"
  default     = "appatable"
}

variable "region" {
  description = "The region where resources will be deployed (ex. eastus, eastus2, etc.)"
  default     = "eastus"
}

variable "sub_id" {
  description = "The ID of the subscription to deploy into"
  default     = "abc12345-099c-1234-1234-998899889988"
}

variable "access_key" {}
I then modified my deploy.sh script to use the line below to run my terraform deployment:
terraform apply \
-var "access_key=$tfstate_storage_access_key" \
-var-file="variables.tf"
This results in the error: invalid value "variables.tf" for flag -var-file: multiple map declarations not supported for variables. Usage: terraform apply [options] [DIR-OR-PLAN]
After playing with this for hours... I am almost embarrassed by what the problem was, but I am also frustrated with Terraform because of the time I wasted on this issue.
I had all of my variables defined in my variables.tf file, all but one with default values. For the one without a default value, I was passing it in on the command line. The command line was where the problem was. From the documentation I had read, I thought I had to tell Terraform where my variables file was using the -var-file option. It turns out you don't: -var-file is for .tfvars files, not .tf files, and when I pointed it at variables.tf it threw the error. All I had to do was use the -var option for the variable with no defined default, and Terraform automatically picked up the variables.tf file. Frustrating. I am in love with Terraform, but the one negative I would give it is that the documentation is lacking.