Delete a Cloudflare zone - terraform

I'm currently working on a Terraform script which creates a Cloudflare zone and makes some configurations, and if the user sets a boolean variable to true I need to delete this Cloudflare zone. The zone is on the Enterprise plan. Can any of you help me delete this Cloudflare zone using my Terraform script? I can downgrade the plan to the Free plan using an API request to Cloudflare.
Is there any Terraform function which can be used to delete a zone?
Code
resource "cloudflare_zone" "cloudflarecreatezone" {
count = var.delete ? 0 : 1
jump_start = "true"
zone = var.zone_name
type = "partial"
plan = "enterprise"
}
resource "cloudflare_waf_group" "Cloudflare_Joomla" {
count = var.delete ? 0 : 1
group_id = "dc85d7a0s342918s886s32056069dfa94"
zone_id = cloudflare_zone.cloudflarecreatezone[count.index].id
mode = "off"
}
resource "null_resource" "dg" {
count = var.delete ? 1 : 0
provisioner "local-exec" {
command = "id=curl -s -k -X GET
'https://api.cloudflare.com/client/v4/zones/?name=${var.zone_name}' -H \"X-Auth-Email: ${var.email}\" -H \"X-Auth-Key: ${var.api_key}\" -H \"Content-Type: application/json\"|awk -F ':' '{print $3}'|awk -F '\"' '{print $2}';curl -s -k -X PATCH -d '{\"plan\":{\"id\":\"0feeeeeeeeeeeeeeeeeeeeeeeeeeeeee\"}}' 'api.cloudflare.com/client/v4/zones/'$id -H \"X-Auth-Email: ${var.email}\" -H \"X-Auth-Key: ${var.api_key}\" -H \"Content-Type: application/json\"" interpreter = ["/bin/bash", "-c"]
}
}
resource "null_resource" "delete_zone" {
count = var.delete ? 1 : 0
}
TIA
I expect my script to be able to delete the cloudflare zone once the delete variable is set to true
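There is no Terraform function that deletes a zone directly: a resource is destroyed when it drops out of the configuration (here, when its count becomes 0) and you run terraform apply. What is missing is the plan downgrade before that deletion. A possible sketch (not a verified setup: it reuses the free-plan id from your own curl command and stores the email/API key in triggers, which means they end up in state) is a null_resource with a destroy-time provisioner. Because it references the zone, Terraform destroys it first, so the downgrade runs before the zone itself is deleted:
resource "null_resource" "downgrade_before_delete" {
  count = var.delete ? 0 : 1

  # Destroy-time provisioners may only reference self, so everything the
  # command needs is kept in triggers.
  triggers = {
    zone_id = cloudflare_zone.cloudflarecreatezone[0].id
    email   = var.email
    api_key = var.api_key
  }

  provisioner "local-exec" {
    when        = destroy
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
curl -s -X PATCH \
  -H "X-Auth-Email: ${self.triggers.email}" \
  -H "X-Auth-Key: ${self.triggers.api_key}" \
  -H "Content-Type: application/json" \
  -d '{"plan":{"id":"0feeeeeeeeeeeeeeeeeeeeeeeeeeeeee"}}' \
  "https://api.cloudflare.com/client/v4/zones/${self.triggers.zone_id}"
EOT
  }
}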

Related

Terraform "file name too long" when executing with "null_resource" "apply"

I'm trying to execute the following command:
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
as part of a local Terraform execution as follows:
locals {
kubeconfig = yamlencode({
apiVersion = "v1"
kind = "Config"
current-context = "terraform"
clusters = [{
name = module.eks.cluster_id
cluster = {
certificate-authority-data = module.eks.cluster_certificate_authority_data
server = module.eks.cluster_endpoint
}
}]
contexts = [{
name = "terraform"
context = {
cluster = module.eks.cluster_id
user = "terraform"
}
}]
users = [{
name = "terraform"
user = {
token = data.aws_eks_cluster_auth.this.token
}
}]
})
}
resource "null_resource" "apply" {
triggers = {
kubeconfig = base64encode(local.kubeconfig)
cmd_patch = <<-EOT
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
EOT
}
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = self.triggers.kubeconfig
}
command = self.triggers.cmd_patch
}
}
Executing the same command outside of Terraform, plainly on the command line works fine.
However, I always get the following error when executing as part of the Terraform script:
│ ': exit status 1. Output:
│ iAic2FtcGxlLWNsdXN0ZXI...WaU5ERXdNekEiCg==":
│ open
│ ImFwaVZlcnNpb24iOiAidjEiy...RXdNekEiCg==:
│ file name too long
Anybody got any ideas what the issue could be?
As per my comment: the KUBECONFIG environment variable needs to be a list of paths to configuration files, not the content of the file itself [1]:
The KUBECONFIG environment variable is a list of paths to configuration files.
The original problem was that the content of the file was encoded in base64 format [2] and used in that format without decoding it first. Thankfully, Terraform has both functions built in, so using base64decode [3] would return the "normal" file content. Still, that would be the file content and not the path to the config file. Based on the other comments, the important thing to note is that the additional_roles_aws_auth.yaml file has to be in the same directory as the root module. As the command is a bit more complicated, I am not sure if you could use Terraform's built-in path object [4] to make sure the file is searched for in the root of the module:
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat ${path.root}/additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
[1] https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
[2] https://www.terraform.io/language/functions/base64encode
[3] https://www.terraform.io/language/functions/base64decode
[4] https://www.terraform.io/language/expressions/references#filesystem-and-workspace-info
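A minimal sketch of that idea (the local_file resource from the hashicorp/local provider and the kubeconfig.yaml file name are my additions, not part of the original configuration): render the kubeconfig to a file and point KUBECONFIG at the file's path rather than at its content:
resource "local_file" "kubeconfig" {
  content  = local.kubeconfig
  filename = "${path.module}/kubeconfig.yaml"
}

resource "null_resource" "apply" {
  triggers = {
    kubeconfig_path = local_file.kubeconfig.filename
    cmd_patch       = <<-EOT
      kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
    EOT
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      # A path to a file, as the kubectl docs require
      KUBECONFIG = self.triggers.kubeconfig_path
    }
    command = self.triggers.cmd_patch
  }
}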
The base64-encoded kubeconfig is used in your command, so you must decode it:
kubectl <YOUR_COMMAND> --kubeconfig <(echo $KUBECONFIG | base64 --decode)

Local-exec destroy triggers - ignore changes to google access token

I have a null_resource with a local-exec block that makes a curl call using a Google access token.
Since that's executed during a destroy, I am forced to define the token as a triggers var.
Each time I do a terraform apply, that null_resource has to be replaced because the Google access token keeps changing.
resource "null_resource" "env_to_group" {
for_each = local.map_env_group
triggers = {
env_id = google_apigee_environment.apigee[each.value.env].id
group_id = google_apigee_envgroup.apigee[each.value.group].id
access_token = data.google_client_config.current.access_token
project = var.project
group = each.value.group
env = each.value.env
}
provisioner "local-exec" {
when = destroy
command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer ${self.triggers.access_token}"\
"https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
-X DELETE -H "content-type:application/json"
EOF
}
}
Is there a way to ignore changes to the Google access token, or a way to avoid specifying the access token var within the triggers block?
I think you should still be able to accomplish this using the depends_on meta-argument and a separate resource for making the ephemeral access token available to the command during the destroy lifecycle.
resource "local_file" "access_token" {
content = data.google_client_config.current.access_token
filename = "/var/share/access-token"
}
resource "null_resource" "env_to_group" {
for_each = local.map_env_group
triggers = {
env_id = google_apigee_environment.apigee[each.value.env].id
group_id = google_apigee_envgroup.apigee[each.value.group].id
project = var.project
group = each.value.group
env = each.value.env
}
depends_on = [local_file.access_token]
provisioner "local-exec" {
when = destroy
command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer $(cat /var/share/access-token)"\
"https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
-X DELETE -H "content-type:application/json"
EOF
}
}
I guess another solution would be to pass the command some kind of credentials with which it could obtain the access token for the related service account through API calls, or to use Application Default Credentials if configured.
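For example (a sketch of that second idea, assuming the gcloud CLI is installed and authenticated on the machine that runs Terraform), the token can be fetched inside the destroy-time command itself, so it never has to live in triggers:
resource "null_resource" "env_to_group" {
  for_each = local.map_env_group

  triggers = {
    env_id   = google_apigee_environment.apigee[each.value.env].id
    group_id = google_apigee_envgroup.apigee[each.value.group].id
    project  = var.project
    group    = each.value.group
    env      = each.value.env
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
  -X DELETE -H "content-type:application/json"
EOF
  }
}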

How to restart EC2 instance using terraform without destroying them?

I am wondering how we can stop and restart an AWS EC2 instance created using Terraform. Is there any way to do that?
As you asked for an example and there is a length limit on comments, I am posting this as an answer using local-exec.
I assume that you have already run aws configure or aws configure --profile test using the AWS CLI.
Here is a complete example to reboot an instance; change the VPC security group ID, subnet, key name, etc. to match your environment.
provider "aws" {
region = "us-west-2"
profile = "test"
}
resource "aws_instance" "ec2" {
ami = "ami-0f2176987ee50226e"
instance_type = "t2.micro"
associate_public_ip_address = false
subnet_id = "subnet-45454566645"
vpc_security_group_ids = ["sg-45454545454"]
key_name = "mytest-ec2key"
tags = {
Name = "Test EC2 Instance"
}
}
resource "null_resource" "reboo_instance" {
provisioner "local-exec" {
on_failure = "fail"
interpreter = ["/bin/bash", "-c"]
command = <<EOT
echo -e "\x1B[31m Warning! Restarting instance having id ${aws_instance.ec2.id}.................. \x1B[0m"
# aws ec2 reboot-instances --instance-ids ${aws_instance.ec2.id} --profile test
# To stop instance
aws ec2 stop-instances --instance-ids ${aws_instance.ec2.id} --profile test
echo "***************************************Rebooted****************************************************"
EOT
}
# this setting will trigger the script every time; change it if needed
triggers = {
always_run = "${timestamp()}"
}
}
Now run terraform apply.
Once the instance is created and you later want to reboot or stop it, just call
terraform apply -target null_resource.reboot_instance
and check the logs.
I have found a simpler way to do it.
provisioner "local-exec" {
command = "ssh -tt -o StrictHostKeyChecking=no someuser@${aws_eip.ec2_public_ip.public_ip} 'sudo shutdown -r'"
}
Using remote-exec:
provisioner "remote-exec" {
inline = [
"sudo /usr/sbin/shutdown -r 1"
]
}
The -r 1 delays the reboot by one minute and prevents the remote-exec command from exiting with a non-zero code.
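A possible variation (my own sketch, not from the answers above; the SSH user and key path are placeholders, and aws_instance.ec2 refers to the instance from the first answer): wrapping the remote-exec in a null_resource with a timestamp trigger lets you re-run the reboot on demand with terraform apply -target null_resource.reboot, rather than only at instance creation time:
resource "null_resource" "reboot" {
  # A new timestamp on every apply forces this resource to be replaced,
  # which re-runs the provisioner.
  triggers = {
    always_run = timestamp()
  }

  connection {
    type        = "ssh"
    host        = aws_instance.ec2.private_ip            # or public_ip, depending on how you reach it
    user        = "ubuntu"                               # placeholder user
    private_key = file(pathexpand("~/.ssh/id_rsa"))      # placeholder key path
  }

  provisioner "remote-exec" {
    inline = [
      "sudo /usr/sbin/shutdown -r 1",
    ]
  }
}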

Updating Service Principal Password with Terraform

Updating a service principal's password with Terraform based on when it's going to expire.
Setting the service principal up with a password the first time works perfectly. However, I want to expire the password, and if it is about to expire, a new one should be generated and the service principal updated with it. I'm not entirely sure how to do conditionals in Terraform as I am still fairly new to it; the docs only really talk about creating the service principal, not updating it, and there is no data object to fetch when the password is going to expire.
So far I have this (full disclosure: this is part of a bigger Terraform base that I am helping with):
resource "azuread_application" "current" {
name = "test"
}
resource "azuread_service_principal" "current" {
application_id = "${azuread_application.current.application_id}"
}
resource "random_string" "password" {
length = 64
special = true
}
resource "azuread_service_principal_password" "current" {
service_principal_id = "${azuread_service_principal.current.id}"
value = "${random_string.password.result}"
end_date_relative = "2160h" # valid for 90 days
}
As the password is only valid for 90 days, I want to run terraform apply just before it expires and update the password.
Update 1:
It seems that if you do change the azuread_service_principal_password resource, it counts as a change in the dependency tree and recreates the resource you have attached the service principal to, which means there is no way to keep the state of the service principal's credentials in Terraform if they need to be updated.
Update 2:
I have attempted to do the following; however, the downside is that it runs every time you run terraform apply:
terraform script:
resource "azuread_application" "current" {
name = "${var.metadata_name}"
}
resource "azuread_service_principal" "current" {
application_id = "${azuread_application.current.application_id}"
}
resource "random_string" "password" {
length = 64
special = true
}
resource "azuread_service_principal_password" "current" {
service_principal_id = "${azuread_service_principal.current.id}"
value = "${random_string.password.result}"
end_date_relative = "2160h" # valid for 90 days
}
resource "null_resource" "password_updater" {
# Updates every time you run terraform apply, so it will run this script every time
triggers {
timestamp = "${timestamp()}"
}
provisioner "local-exec" {
command = "sh ${path.module}/update_service_password.sh ${azuread_service_principal.current.id} ${var.resource_group} ${azurerm_kubernetes_cluster.current.name}"
}
}
script:
#!/bin/sh
service_principle_id=$1
resource_group=$2
cluster_name=$3
# get service password expiration
expiration=$(az ad sp list --filter="objectId eq '$service_principle_id'" | jq '.[].passwordCredentials' | jq '.[].endDate' | cut -d'T' -f 1 | cut -d'"' -f 2)
# Format date for condition
now=$(date +%Y%m%d%H%M%S)
expiration_date=$(date -d "$expiration - 30 days" +%Y%m%d%H%M%S)
# Compare today with expiration date
if [ ${now} -ge ${expiration_date} ];
then
# If the expiration date is within the next 30 days, reset the password
sp_id=$(az aks show -g ${resource_group} -n ${cluster_name} --query servicePrincipalProfile.clientId -o tsv)
service_principle_secret=$(az ad sp credential reset --name ${sp_id} --end-date $(date -d "+ 90 days" +%Y-%m-%d) --query password -o tsv)
# Update cluster with new password
az aks update-credentials \
--resource-group ${resource_group} \
--name ${cluster_name} \
--reset-service-principal \
--service-principal ${sp_id} \
--client-secret ${service_principle_secret}
fi
For the service principal, the password can be reset through the Azure CLI command az ad sp credential reset, but you need to have permission to do that.
I am just going to set this as the answer, as after talking to the developers of the service principal Terraform module they have told me it is not possible any other way. If a better way is found, please comment:
Answer:
Use the null_resource provider to run a script that performs the update:
resource "azuread_application" "current" {
name = "${var.metadata_name}"
}
resource "azuread_service_principal" "current" {
application_id = "${azuread_application.current.application_id}"
}
resource "random_string" "password" {
length = 64
special = true
}
resource "azuread_service_principal_password" "current" {
service_principal_id = "${azuread_service_principal.current.id}"
value = "${random_string.password.result}"
end_date_relative = "2160h" # valid for 90 days
}
resource "null_resource" "password_updater" {
# Updates every time you run terraform apply, so it will run this script every time
triggers {
timestamp = "${timestamp()}"
}
provisioner "local-exec" {
command = "sh ${path.module}/update_service_password.sh ${azuread_service_principal.current.id} ${var.resource_group} ${azurerm_kubernetes_cluster.current.name}"
}
}
script:
#!/bin/sh
service_principle_id=$1
resource_group=$2
cluster_name=$3
# get service password expiration
expiration=$(az ad sp list --filter="objectId eq '$service_principle_id'" | jq '.[].passwordCredentials' | jq '.[].endDate' | cut -d'T' -f 1 | cut -d'"' -f 2)
# Format date for condition
now=$(date +%Y%m%d%H%M%S)
expiration_date=$(date -d "$expiration - 30 days" +%Y%m%d%H%M%S)
# Compare today with expiration date
if [ ${now} -ge ${expiration_date} ];
then
# If the expiration date is within the next 30 days, reset the password
sp_id=$(az aks show -g ${resource_group} -n ${cluster_name} --query servicePrincipalProfile.clientId -o tsv)
service_principle_secret=$(az ad sp credential reset --name ${sp_id} --end-date $(date -d "+ 90 days" +%Y-%m-%d) --query password -o tsv)
# Update cluster with new password
az aks update-credentials \
--resource-group ${resource_group} \
--name ${cluster_name} \
--reset-service-principal \
--service-principal ${sp_id} \
--client-secret ${service_principle_secret}
fi
I think a better approach is this:
Your Terraform code is most likely wrapped in a bigger process; most likely you use bash to kick off that process and then Terraform. If not, I suggest you do, as this is the best practice with Terraform.
In your bash code, before running Terraform, check the expiry of the relevant service principals, for example using the az CLI (the exact tool does not matter).
If a password is about to expire, use the terraform taint command to mark the service principal password resource as tainted. I do not have the details; maybe you need to taint the service principal too, maybe not.
Once tainted, Terraform would recreate the resource and regenerate the password. A rough sketch of such a wrapper is shown below.
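A rough sketch of that wrapper (the object id variable and the 30-day threshold are placeholders, the jq filter assumes the password credential layout used in the script above, and GNU date -d syntax is assumed):
#!/bin/sh
# Placeholder: object id of the service principal to check
sp_object_id="<service-principal-object-id>"

# Earliest password expiration date of the service principal
expiration=$(az ad sp list --filter="objectId eq '$sp_object_id'" \
  | jq -r '.[].passwordCredentials[].endDate' | cut -d'T' -f 1 | sort | head -n 1)

now=$(date +%Y%m%d)
threshold=$(date -d "$expiration - 30 days" +%Y%m%d)

# If the password expires within the next 30 days, taint the resource so
# the next apply recreates it (and thereby generates a new password)
if [ "$now" -ge "$threshold" ]; then
  terraform taint azuread_service_principal_password.current
fi

terraform apply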

How to get private IPs of EC2 instances dynamically and put them in /etc/hosts

I would like to create multiple EC2 instances using Terraform and write the private IP addresses of the instances to /etc/hosts on every instance.
Currently I am trying the following code but it's not working:
resource "aws_instance" "ceph-cluster" {
count = "${var.ceph_cluster_count}"
ami = "${var.app_ami}"
instance_type = "t2.small"
key_name = "${var.ssh_key_name}"
vpc_security_group_ids = [
"${var.vpc_ssh_sg_ids}",
"${aws_security_group.ceph.id}",
]
subnet_id = "${element(split(",", var.subnet_ids), count.index)}"
associate_public_ip_address = "true"
// TODO: temporarily using a fixed IAM role
//iam_instance_profile = "${aws_iam_instance_profile.app_instance_profile.name}"
iam_instance_profile = "${var.iam_role_name}"
root_block_device {
delete_on_termination = "true"
volume_size = "30"
volume_type = "gp2"
}
connection {
user = "ubuntu"
private_key = "${file("${var.ssh_key}")}"
agent = "false"
}
provisioner "file" {
source = "../../../scripts"
destination = "/home/ubuntu/"
}
tags {
Name = "${var.infra_name}-ceph-cluster-${count.index}"
InfraName = "${var.infra_name}"
}
provisioner "remote-exec" {
inline = [
"cat /etc/hosts",
"cat ~/scripts/ceph/ceph_rsa.pub >> ~/.ssh/authorized_keys",
"cp -arp ~/scripts/ceph/ceph_rsa ~/.ssh/ceph_rsa",
"chmod 700 ~/.ssh/ceph_rsa",
"echo 'IdentityFile ~/.ssh/ceph_rsa' >> ~/.ssh/config",
"echo 'User ubuntu' >> ~/.ssh/config",
"echo '${aws_instance.ceph-cluster.0.private_ip} node01 ceph01' >> /etc/hosts ",
"echo '${aws_instance.ceph-cluster.1.private_ip} node02 ceph02' >> /etc/hosts "
]
}
}
aws_instance.ceph-cluster.*.private_ip
I would like to get the result of the above expression and put it in /etc/hosts.
I had a similar need for a database cluster (some sort of poor man's Consul alternative), and I ended up using the following Terraform file:
variable "cluster_member_count" {
description = "Number of members in the cluster"
default = "3"
}
variable "cluster_member_name_prefix" {
description = "Prefix to use when naming cluster members"
default = "cluster-node-"
}
variable "aws_keypair_privatekey_filepath" {
description = "Path to SSH private key to SSH-connect to instances"
default = "./secrets/aws.key"
}
# EC2 instances
resource "aws_instance" "cluster_member" {
count = "${var.cluster_member_count}"
# ...
}
# Bash command to populate /etc/hosts file on each instances
resource "null_resource" "provision_cluster_member_hosts_file" {
count = "${var.cluster_member_count}"
# Changes to any instance of the cluster requires re-provisioning
triggers {
cluster_instance_ids = "${join(",", aws_instance.cluster_member.*.id)}"
}
connection {
type = "ssh"
host = "${element(aws_instance.cluster_member.*.public_ip, count.index)}"
user = "ec2-user"
private_key = "${file(var.aws_keypair_privatekey_filepath)}"
}
provisioner "remote-exec" {
inline = [
# Adds all cluster members' IP addresses to /etc/hosts (on each member)
"echo '${join("\n", formatlist("%v", aws_instance.cluster_member.*.private_ip))}' | awk 'BEGIN{ print \"\\n\\n# Cluster members:\" }; { print $0 \" ${var.cluster_member_name_prefix}\" NR-1 }' | sudo tee -a /etc/hosts > /dev/null",
]
}
}
One rule is that each cluster member gets named by the cluster_member_name_prefix Terraform variable followed by the count index (starting at 0): cluster-node-0, cluster-node-1, etc.
This will add the following lines to each "aws_instance.cluster_member" resource's /etc/hosts file (the exact same lines, in the same order, for every member):
# Cluster members:
10.0.1.245 cluster-node-0
10.0.1.198 cluster-node-1
10.0.1.153 cluster-node-2
In my case, the null_resource that populates the /etc/hosts file was triggered by an EBS volume attachment, but a "${join(",", aws_instance.cluster_member.*.id)}" trigger should work just fine too.
Also, for local development, I added a local-exec provisioner to locally write down each IP in a cluster_ips.txt file:
resource "null_resource" "write_resource_cluster_member_ip_addresses" {
depends_on = ["aws_instance.cluster_member"]
provisioner "local-exec" {
command = "echo '${join("\n", formatlist("instance=%v ; private=%v ; public=%v", aws_instance.cluster_member.*.id, aws_instance.cluster_member.*.private_ip, aws_instance.cluster_member.*.public_ip))}' | awk '{print \"node=${var.cluster_member_name_prefix}\" NR-1 \" ; \" $0}' > \"${path.module}/cluster_ips.txt\""
# Outputs is:
# node=cluster-node-0 ; instance=i-03b1f460318c2a1c3 ; private=10.0.1.245 ; public=35.180.50.32
# node=cluster-node-1 ; instance=i-05606bc6be9639604 ; private=10.0.1.198 ; public=35.180.118.126
# node=cluster-node-2 ; instance=i-0931cbf386b89ca4e ; private=10.0.1.153 ; public=35.180.50.98
}
}
And, with the following shell command I can add them to my local /etc/hosts file:
awk -F'[;=]' '{ print $8 " " $2 " #" $4 }' cluster_ips.txt >> /etc/hosts
Example:
35.180.50.32 cluster-node-0 # i-03b1f460318c2a1c3
35.180.118.126 cluster-node-1 # i-05606bc6be9639604
35.180.50.98 cluster-node-2 # i-0931cbf386b89ca4e
Terraform provisioners expose a self syntax for getting data about the resource being created.
If you were just interested in the private IP address of the instance being created, you could use ${self.private_ip} to get at it.
Unfortunately, if you need the IP addresses of multiple sub-resources (e.g. ones created by using the count meta-attribute), then you will need to do this outside of the resource's provisioner using the null_resource provider.
The null_resource provider docs show a good use case for this:
resource "aws_instance" "cluster" {
count = 3
...
}
resource "null_resource" "cluster" {
# Changes to any instance of the cluster requires re-provisioning
triggers {
cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
}
# Bootstrap script can run on any instance of the cluster
# So we just choose the first in this case
connection {
host = "${element(aws_instance.cluster.*.public_ip, 0)}"
}
provisioner "remote-exec" {
# Bootstrap script called with private_ip of each node in the cluster
inline = [
"bootstrap-cluster.sh ${join(" ", aws_instance.cluster.*.private_ip)}",
]
}
}
but in your case you probably want something like:
resource "aws_instance" "ceph-cluster" {
...
}
resource "null_resource" "ceph-cluster" {
# Changes to any instance of the cluster requires re-provisioning
triggers {
cluster_instance_ids = "${join(",", aws_instance.ceph-cluster.*.id)}"
}
connection {
host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
}
provisioner "remote-exec" {
inline = [
"cat /etc/hosts",
"cat ~/scripts/ceph/ceph_rsa.pub >> ~/.ssh/authorized_keys",
"cp -arp ~/scripts/ceph/ceph_rsa ~/.ssh/ceph_rsa",
"chmod 700 ~/.ssh/ceph_rsa",
"echo 'IdentityFile ~/.ssh/ceph_rsa' >> ~/.ssh/config",
"echo 'User ubuntu' >> ~/.ssh/config",
"echo '${aws_instance.ceph-cluster.0.private_ip} node01 ceph01' >> /etc/hosts ",
"echo '${aws_instance.ceph-cluster.1.private_ip} node02 ceph02' >> /etc/hosts "
]
}
}
This could be a piece of cake with Terraform/Sparrowform. No need for null_resources, and a minimum of fuss:
Bootstrap the infrastructure
$ terraform apply
Prepare a Sparrowform provision scenario to insert all nodes' public IPs / DNS names into every node's /etc/hosts file
$ cat sparrowfile
#!/usr/bin/env perl6
use Sparrowform;
my @hosts = (
"127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4",
"::1 localhost localhost.localdomain localhost6 localhost6.localdomain6"
);
for tf-resources() -> $r {
my $rd = $r[1]; # resource data
next unless $rd<public_ip>;
next unless $rd<public_dns>;
next if $rd<public_ip> eq input_params('Host');
push @hosts, $rd<public_ip> ~ ' ' ~ $rd<public_dns>;
}
file '/etc/hosts', %(
action => 'create',
content => @hosts.join("\n")
);
Give it a run; Sparrowform will execute the scenario on every node
$ sparrowform --bootstrap --ssh_private_key=~/.ssh/aws.key --ssh_user=ec2-user
PS. disclosure - I am the tool author
