I have a null_resource with a local-exec provisioner that makes a curl call using a Google access token.
Since the provisioner runs at destroy time, I am forced to pass the token through the triggers map.
Each time I run terraform apply, the null_resource has to be replaced because the Google access token keeps changing.
resource "null_resource" "env_to_group" {
for_each = local.map_env_group
triggers = {
env_id = google_apigee_environment.apigee[each.value.env].id
group_id = google_apigee_envgroup.apigee[each.value.group].id
access_token = data.google_client_config.current.access_token
project = var.project
group = each.value.group
env = each.value.env
}
provisioner "local-exec" {
when = destroy
command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer ${self.triggers.access_token}"\
"https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
-X DELETE -H "content-type:application/json"
EOF
}
}
Is there a way to ignore changes to the Google access token, or a way to avoid specifying the access token within the triggers block?
I think you should still be able to accomplish this using the depends_on meta-argument and a separate resource for making the ephemeral access token available to the command during the destroy lifecycle.
resource "local_file" "access_token" {
content = data.google_client_config.current.access_token
filename = "/var/share/access-token"
}
resource "null_resource" "env_to_group" {
for_each = local.map_env_group
triggers = {
env_id = google_apigee_environment.apigee[each.value.env].id
group_id = google_apigee_envgroup.apigee[each.value.group].id
project = var.project
group = each.value.group
env = each.value.env
}
depends_on = [local_file.access_token]
provisioner "local-exec" {
when = destroy
command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer $(cat /var/share/access-token)"\
"https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
-X DELETE -H "content-type:application/json"
EOF
}
}
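As an aside on the first part of the question: recent Terraform versions can ignore a single trigger key with index syntax in lifecycle ignore_changes. A minimal sketch (assuming a Terraform version that supports index expressions in ignore_changes):

resource "null_resource" "env_to_group" {
  # ... same for_each and triggers as above ...

  lifecycle {
    # Keep the stored token from forcing replacement on every apply
    ignore_changes = [triggers["access_token"]]
  }
}

The catch is that self.triggers.access_token would then hold a long-expired token by the time the destroy provisioner runs, which is why the file approach above is preferable.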
Another solution would be to pass credentials to the command and obtain the access token for the relevant service account via API calls, or to rely on Application Default Credentials if configured.
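For instance, a minimal sketch of the ADC route (assuming the gcloud CLI is installed and authenticated on the machine running Terraform), fetching a fresh token at destroy time instead of persisting one in triggers:

resource "null_resource" "env_to_group" {
  for_each = local.map_env_group

  triggers = {
    project = var.project
    group   = each.value.group
    env     = each.value.env
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
  -X DELETE -H "content-type:application/json"
EOF
  }
}

Since the token never enters the triggers map, nothing changes between applies. Note the env_id/group_id entries were dropped here, so you may want an explicit depends_on on the Apigee resources to preserve destroy ordering.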
I need to get the repository id of an existing project to work on that repo. There seems to be no way other than using the Azure DevOps REST API.
I tried to use the REST API to get the repo id in my Terraform code:
data "http" "example" {
url = "https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0"
request_headers = {
"Authorization" = "Basic ${base64encode("PAT:${var.personal_access_token}")}"
}
}
output "repository_id" {
value = data.http.example.json.value[0].id
}
It yields an error when I run terraform plan:
Error: Unsupported attribute
line 29, in output "repository_id":
29: value = data.http.example.json.value[0].id
I also tried with jsondecode (jq is already installed):
resource "null_resource" "example" {
provisioner "local-exec" {
command = "curl -s -H 'Authorization: Bearer ${var.pat}' https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0 | jq '.value[0].id'"
interpreter = ["bash", "-c"]
}
}
output "repo_id" {
value = "${jsondecode(null_resource.example.stdout).id}"
}
That did not work either.
The Azure DevOps REST API itself works fine; I just cannot fetch the value from the response into Terraform. What would be the right code, or can it be done without using the REST API?
Thank you!
The proper way to interact with an external API and return its output to Terraform is through the external data source. The TF docs linked provide an example of how to use and create such a data source.
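A minimal sketch of such a data source (the jq filter and inline script are illustrative; this assumes curl and jq are available where Terraform runs, and that the {organization}/{project} placeholders are filled in):

data "external" "repo" {
  program = ["bash", "-c", <<EOF
curl -s -u ":${var.personal_access_token}" \
  "https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0" \
  | jq '{id: .value[0].id}'
EOF
  ]
}

output "repository_id" {
  value = data.external.repo.result.id
}

The external data source requires the program to print a flat JSON object of strings on stdout, which is why the response is reduced to {id: ...} with jq.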
I'm currently working on a Terraform script that creates a Cloudflare zone and makes some configuration, and if a user sets a boolean variable to true I need to delete this Cloudflare zone. The zone is on the Enterprise plan. Can any of you help me delete this Cloudflare zone using my Terraform script? I can downgrade the plan to the Free plan using an API request to Cloudflare.
Is there a Terraform function that can be used to delete a zone?
Code
resource "cloudflare_zone" "cloudflarecreatezone" {
count = var.delete ? 0 : 1
jump_start = "true"
zone = var.zone_name
type = "partial"
plan = "enterprise"
}
resource "cloudflare_waf_group" "Cloudflare_Joomla" {
count = var.delete ? 0 : 1
group_id = "dc85d7a0s342918s886s32056069dfa94"
zone_id = cloudflare_zone.cloudflarecreatezone[count.index].id
mode = "off"
}
resource "null_resource" "dg" {
count = var.delete ? 1 : 0
provisioner "local-exec" {
command = "id=curl -s -k -X GET
'https://api.cloudflare.com/client/v4/zones/?name=${var.zone_name}' -H \"X-Auth-Email: ${var.email}\" -H \"X-Auth-Key: ${var.api_key}\" -H \"Content-Type: application/json\"|awk -F ':' '{print $3}'|awk -F '\"' '{print $2}';curl -s -k -X PATCH -d '{\"plan\":{\"id\":\"0feeeeeeeeeeeeeeeeeeeeeeeeeeeeee\"}}' 'api.cloudflare.com/client/v4/zones/'$id -H \"X-Auth-Email: ${var.email}\" -H \"X-Auth-Key: ${var.api_key}\" -H \"Content-Type: application/json\"" interpreter = ["/bin/bash", "-c"]
}
}
resource "null_resource" "delete_zone" {
count = var.delete ? 1 : 0
}
TIA
I expect my script to be able to delete the Cloudflare zone once the delete variable is set to true.
I have an EKS cluster deployed in AWS, and I use Terraform to deploy components to that cluster.
To authenticate, I'm using the following EKS data sources, which provide the cluster API authentication:
data "aws_eks_cluster_auth" "cluster" {
name = var.cluster_id
}
data "aws_vpc" "eks_vpc" {
id = var.vpc_id
}
And I use the token inside several local-exec provisioners (apart from other resources) to deploy components:
resource "null_resource" "deployment" {
provisioner "local-exec" {
working_dir = path.module
command = <<EOH
kubectl \
--server="${data.aws_eks_cluster.cluster.endpoint}" \
--certificate-authority=./ca.crt \
--token="${data.aws_eks_cluster_auth.cluster.token}" \
apply -f test.yaml
EOH
}
}
The problem I have is that some resources take a little while to deploy, and at some point, when Terraform executes the next resource, I get this error because the token has expired:
exit status 1. Output: error: You must be logged in to the server (the server has asked for the client to provide credentials)
Is there a way to force a refresh of the data source before running the local-execs?
UPDATE: example moved to https://github.com/aidanmelen/terraform-kubernetes-rbac/blob/main/examples/authn_authz/main.tf
The data.aws_eks_cluster_auth.cluster_auth.token creates a token with a non-configurable 15-minute timeout.
One way to get around this is to use the STS token to create a long-lived service-account token and use that to configure the Terraform Kubernetes provider for long-running Kubernetes resources.
I created a module called terraform-kubernetes-service-account to capture this common behavior of creating a service account, giving it some permissions, and outputting the auth information, i.e. token, ca.crt, namespace.
For example:
module "terraform_admin" {
source = "aidanmelen/service-account/kubernetes"
name = "terraform-admin"
namespace = "kube-system"
cluster_role_name = "terraform-admin"
cluster_role_rules = [
{
api_groups = ["*"]
resources = ["*"]
resource_names = ["*"]
verbs = ["*"]
},
]
}
provider "kubernetes" {
alias = "terraform_admin_service_account"
host = "https://kubernetes.docker.internal:6443"
cluster_ca_certificate = module.terraform_admin.auth["ca.crt"]
token = module.terraform_admin.auth["token"]
}
data "kubernetes_namespace_v1" "example" {
metadata {
name = kubernetes_namespace.ex_complete.metadata[0].name
}
}
Updating a service principal's password with Terraform based on when it's going to expire
Setting up the service principal with a password works perfectly the first time. However, I want to expire the password: if it is going to expire, a new one should be generated and the service principal updated with it. I'm not entirely sure how to do conditionals in Terraform, as I am still fairly new to it. The docs only really talk about creating the service principal, not updating it, and there is no data object to fetch when the password is going to expire.
So far I have this (full disclosure: this is part of a bigger Terraform codebase that I am helping with):
resource "azuread_application" "current" {
name = "test"
}
resource "azuread_service_principal" "current" {
application_id = "${azuread_application.current.application_id}"
}
resource "random_string" "password" {
length = 64
special = true
}
resource "azuread_service_principal_password" "current" {
service_principal_id = "${azuread_service_principal.current.id}"
value = "${random_string.password.result}"
end_date_relative = "2160h" # valid for 90 days
}
As the password is only valid for 90 days, I want to run terraform apply just before it expires and update the password.
Update 1:
It seems that if you do change the azuread_service_principal_password resource, it counts as a change in the dependency tree and recreates the resource you have attached the service principal to, which means there is no way to keep the state of the service principal's credentials in Terraform if they need to be updated.
Update 2:
I have attempted the following; however, the downside is that it runs every time you run terraform apply:
terraform script:
resource "azuread_application" "current" {
name = "${var.metadata_name}"
}
resource "azuread_service_principal" "current" {
application_id = "${azuread_application.current.application_id}"
}
resource "random_string" "password" {
length = 64
special = true
}
resource "azuread_service_principal_password" "current" {
service_principal_id = "${azuread_service_principal.current.id}"
value = "${random_string.password.result}"
end_date_relative = "2160h" # valid for 90 days
}
resource "null_resource" "password_updater" {
# Updates everytime you run terraform apply so it will run this script everytime
triggers {
timestamp = "${timestamp()}"
}
provisioner "local-exec" {
command = "sh ${path.module}/update_service_password.sh ${azuread_service_principal.current.id} ${var.resource_group} ${azurerm_kubernetes_cluster.current.name}"
}
}
script:
#!/bin/sh
service_principle_id=$1
resource_group=$2
cluster_name=$3

# Get the service principal password expiration
expiration=$(az ad sp list --filter="objectId eq '$service_principle_id'" | jq '.[].passwordCredentials' | jq '.[].endDate' | cut -d'T' -f 1 | cut -d'"' -f 2)

# Format dates for the comparison
now=$(date +%Y%m%d%H%M%S)
expiration_date=$(date -d "$expiration - 30 days" +%Y%m%d%H%M%S)

# Compare today with the expiration date
if [ ${now} -ge ${expiration_date} ]; then
  # If the expiration date is within the next 30 days, reset the password
  sp_id=$(az aks show -g ${resource_group} -n ${cluster_name} --query servicePrincipalProfile.clientId -o tsv)
  service_principle_secret=$(az ad sp credential reset --name ${sp_id} --end-date $(date -d "+ 90 days" +%Y-%m-%d) --query password -o tsv)

  # Update the cluster with the new password
  az aks update-credentials \
    --resource-group ${resource_group} \
    --name ${cluster_name} \
    --reset-service-principal \
    --service-principal ${sp_id} \
    --client-secret ${service_principle_secret}
fi
For the service principal, the password can be reset through the Azure CLI with az ad sp credential reset, but you need permission to do that.
I am just going to set this as the answer: after talking to the developers of the service principal Terraform module, they told me it is not possible any other way. If a better way is found, please comment.
Answer:
Use the null_resource provider to run a script that performs the update:
resource "azuread_application" "current" {
name = "${var.metadata_name}"
}
resource "azuread_service_principal" "current" {
application_id = "${azuread_application.current.application_id}"
}
resource "random_string" "password" {
length = 64
special = true
}
resource "azuread_service_principal_password" "current" {
service_principal_id = "${azuread_service_principal.current.id}"
value = "${random_string.password.result}"
end_date_relative = "2160h" # valid for 90 days
}
resource "null_resource" "password_updater" {
# Updates everytime you run terraform apply so it will run this script everytime
triggers {
timestamp = "${timestamp()}"
}
provisioner "local-exec" {
command = "sh ${path.module}/update_service_password.sh ${azuread_service_principal.current.id} ${var.resource_group} ${azurerm_kubernetes_cluster.current.name}"
}
}
script:
#!/bin/sh
service_principle_id=$1
resource_group=$2
cluster_name=$3

# Get the service principal password expiration
expiration=$(az ad sp list --filter="objectId eq '$service_principle_id'" | jq '.[].passwordCredentials' | jq '.[].endDate' | cut -d'T' -f 1 | cut -d'"' -f 2)

# Format dates for the comparison
now=$(date +%Y%m%d%H%M%S)
expiration_date=$(date -d "$expiration - 30 days" +%Y%m%d%H%M%S)

# Compare today with the expiration date
if [ ${now} -ge ${expiration_date} ]; then
  # If the expiration date is within the next 30 days, reset the password
  sp_id=$(az aks show -g ${resource_group} -n ${cluster_name} --query servicePrincipalProfile.clientId -o tsv)
  service_principle_secret=$(az ad sp credential reset --name ${sp_id} --end-date $(date -d "+ 90 days" +%Y-%m-%d) --query password -o tsv)

  # Update the cluster with the new password
  az aks update-credentials \
    --resource-group ${resource_group} \
    --name ${cluster_name} \
    --reset-service-principal \
    --service-principal ${sp_id} \
    --client-secret ${service_principle_secret}
fi
I think a better approach is this:
Your Terraform code is most likely wrapped within a bigger process; most likely you use bash to kick off the process and then Terraform. If not, I suggest you do so, as this is best practice with Terraform.
In your bash code, before running Terraform, check the expiry of the relevant service principals, using the az CLI for example (the tool does not matter).
If expired, use the terraform taint command to mark the service principal password resources as tainted. I do not have the details; maybe you need to taint the service principal too, maybe not.
If tainted, Terraform would recreate the resources and regenerate the password. A rough sketch follows below.
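A rough sketch of such a wrapper (the object id and 30-day threshold are illustrative; this assumes GNU date and the az CLI, matching the scripts above, and the resource addresses from the configuration in this thread):

#!/bin/sh
# Hypothetical object id of the service principal whose password we rotate
sp_object_id="00000000-0000-0000-0000-000000000000"

# Read the password expiry via the az CLI
expiry=$(az ad sp list --filter="objectId eq '$sp_object_id'" --query "[0].passwordCredentials[0].endDate" -o tsv | cut -d'T' -f 1)

# Taint the password resources if they expire within 30 days
if [ "$(date -d "$expiry - 30 days" +%Y%m%d)" -le "$(date +%Y%m%d)" ]; then
  terraform taint random_string.password
  terraform taint azuread_service_principal_password.current
fi

terraform apply

Tainting random_string.password as well matters here, because the password value comes from that resource; tainting only azuread_service_principal_password.current would recreate it with the same random string.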
Has anyone come up with a decent way to do this?
In short: you have a provider "aws", configured via env vars or a profile, with or without STS; it doesn't matter. Maybe you have several.
Now you want to call out to the AWS CLI because something isn't well implemented in the aws provider. In my case, I need to generate and upload some sensitive information directly to an S3 bucket that I do not want in the state file. In any case, it was s3 sync, so the action is idempotent.
However, there appears to be no way to pass the provider credentials - permanent, env var, profile, or temporary STS - to a null_resource clause:
provider "aws" {
# set using explicit setting or profile or however
alias = "myaws"
}
resource "null_resource" "cli" {
provisioner "local-exec" {
command = "aws <do something>"
environment {
# happy to pass AWS_PROFILE or AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY here...
# if there were a way to retrieve it from the "myaws" provider
}
}
}
You can pass an AWS role_arn into a local-exec script. For example:
variable "aws_role" {
type = string
description = "AWS role for local exec to assume"
default = "arn:aws:iam::123456789012:role/DBMigrateRole"
}
resource "null_resource" "call-db-migrate" {
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOF
set -e
CREDENTIALS=(`aws sts assume-role \
--role-arn ${var.aws_role} \
--role-session-name "db-migration-cli" \
--query "[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]" \
--output text`)
unset AWS_PROFILE
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID="$${CREDENTIALS[0]}"
export AWS_SECRET_ACCESS_KEY="$${CREDENTIALS[1]}"
export AWS_SESSION_TOKEN="$${CREDENTIALS[2]}"
aws sts get-caller-identity
EOF
}
}
Credit to https://github.com/hashicorp/terraform-provider-aws/issues/8242#issuecomment-586687360.