multiple commands in local-exec provisioner terraform - terraform

I'm trying to achieve something inside a Terraform local-exec provisioner. Currently it looks like this:
resource "null_resource" "ledger_summary_backup" {
  provisioner "local-exec" {
    command = <<EOT
BackupArn=`aws dynamodb create-backup --table-name ${var.environment_id}-AccountService-LedgerSummary --backup-name ${var.environment_id}-AccountService-LedgerSummary-Backup | grep -oP '(?<="BackupArn": ")[^"]*'`
aws dynamodb restore-table-from-backup --target-table-name ${var.environment_id}-AccountService-LedgerSummary-Duplicate --backup-arn $BackupArn
EOT
  }
}
This code is inside a Terraform script which creates a DynamoDB table. Inside the provisioner I'm taking a backup of the table, trying to save its ARN into the variable BackupArn, and trying to use it in the DynamoDB restore job on the second line. But it's throwing error exit status 255. Output: 'BackupArn' is not recognized as an internal or external command
Can someone help with this? How can I get the backup ARN (if this is not the correct way) and use it in the subsequent restore command?
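The "'BackupArn' is not recognized as an internal or external command" message suggests the command is being run by Windows cmd.exe, the default local-exec interpreter on Windows, which doesn't understand the shell-style variable assignment. A minimal sketch of one way around it, assuming bash is available on the machine running Terraform: force the provisioner to use bash via the interpreter argument, and extract the ARN with the AWS CLI's --query option instead of grep.
resource "null_resource" "ledger_summary_backup" {
  provisioner "local-exec" {
    # Run the whole heredoc through bash rather than the default cmd.exe,
    # so the BackupArn assignment and the second command behave as intended.
    interpreter = ["bash", "-c"]
    command     = <<EOT
BackupArn=$(aws dynamodb create-backup \
  --table-name ${var.environment_id}-AccountService-LedgerSummary \
  --backup-name ${var.environment_id}-AccountService-LedgerSummary-Backup \
  --query 'BackupDetails.BackupArn' --output text)
aws dynamodb restore-table-from-backup \
  --target-table-name ${var.environment_id}-AccountService-LedgerSummary-Duplicate \
  --backup-arn "$BackupArn"
EOT
  }
}
Note that the backup may take a moment to become AVAILABLE after creation, so in practice the restore step might need a wait or retry.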

Related

Execute Azure command in PowerShell without writing error to console?

I am using a PowerShell script in a pipeline, and the problem I have is with this query.
$value = $(az appconfig kv show -n ThisisEnv --key thisisconfigkey) | ConvertFrom-Json
What this query does is get the data related to the key if it exists. If the key doesn't exist it gives an error like:
ERROR: Key 'abcdefg' with label 'None' does not exist.
It works as expected. In the pipeline, when the key doesn't exist, an error is printed on the CLI. The pipeline sees it as an error and marks the run as failed. Is there a way I can make this work?
Is there a way I can stop it printing to the console? Is there a PowerShell operator which would let me get the value from the Azure command without printing anything to the console?
You could try redirecting the standard error stream using 2> $null:
$value = $(az appconfig kv show -n ThisisEnv --key thisisconfigkey 2> $null) | ConvertFrom-Json
This will suppress the error within the console. You might also want to set powerShellIgnoreLASTEXITCODE within the Azure CLI task so that the pipeline run doesn't fail, or, as a workaround, set $LASTEXITCODE to 0.

Terraform local-exec to pass a new resource (not hard-coding)

I'm executing a bash script using Terraform. My null_resource configuration is given below:
resource "null_resource" "mytest" {
triggers = {
run = "${timestamp()}"
}
provisioner "local-exec" {
command = "sh test.sh"
}
}
Let us take a git repo as an example resource. I'm doing a bunch of operations on the resource using the script. The resource names are passed using terraform.tfvars:
resource_list = ["RS1","RS2","RS3","RS4"]
If I want to add a new repo named RS5, I can update the above list by adding RS5 to it.
How will I pass the new resource name to the script? I'm not looking to hard-code the parameter as given below:
sh test.sh "RS5"
How will I pass the most recently added resource to the script?
Use for_each to execute the script once per resource in the resource_list variable:
resource "null_resource" "mytest" {
for_each = toset(var.resource_list)
# using timestamp() will cause the script to run again for every value on
# every run since the value changes every time.
# you probably don't want that behavior.
# triggers = {
# run = "${timestamp()}"
# }
provisioner "local-exec" {
command = "sh test.sh ${each.value}"
}
}
When you run this the first time, it will execute the script once for each of the resources in resource_list. The Terraform state file is then updated to record what has already run, so any subsequent runs will only include new items added to resource_list.
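For this to plan, the resource_list variable referenced by toset() has to be declared somewhere in the configuration. A minimal declaration matching the terraform.tfvars entry above might look like this (the list(string) type is an assumption, since the original variable block isn't shown in the question):
# Hypothetical declaration matching the terraform.tfvars entry above.
variable "resource_list" {
  type    = list(string)
  default = []
}
Adding "RS5" to resource_list in terraform.tfvars then creates exactly one new null_resource instance, so the script runs only for the new entry on the next apply.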

Using data "external" to receive a variable from a bash script in Terraform

I want to fetch an SSH key using a DigitalOcean token with the get_sshkey.sh script:
do_token=$1
curl -X GET -s -H "Authorization: Bearer ${do_token//\"}" "https://api.digitalocean.com/v2/account/keys?page=1" | jq -r --arg queryname "User's key" '.ssh_keys[] | select(.name == $queryname).public_key'
I have a declared variable for the DO token, var.do-token. I am trying to use the bash $1 positional parameter to pass it to get_sshkey.sh and run it in a Terraform data "external" block as follows:
data "external" "fetchssh" {
program = ["sh", "/input/get_sshkey.sh `echo "var.do-token" | terraform -chdir=/input console -var-file terraform.auto.tfvars`"]
}
but I get the error: Expected a comma to mark the beginning of the next item.
In order to include literal quotes in your shell command, you'll need to escape them so that Terraform can see that you don't intend to end the quoted string template:
data "external" "fetchssh" {
program = ["sh", "/input/get_sshkey.sh `echo \"var.do-token\" | terraform -chdir=/input console -var-file terraform.auto.tfvars`"]
}
Using the external data source in Terraform to recursively run another Terraform process is a pretty irregular thing to do, but as long as there's such a variable defined in that configuration, I suppose it could work.
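For reference, the external data source expects the configured program to print a single JSON object on stdout, and any extra elements of program are passed to the script as positional arguments. A minimal sketch under those rules, avoiding the recursive terraform console call entirely (the public_key result key is an assumption; the script would need a final jq step to wrap its output in JSON):
data "external" "fetchssh" {
  # The token is passed directly as $1 to the script, matching how
  # get_sshkey.sh already reads it.
  program = ["sh", "/input/get_sshkey.sh", var.do-token]
}

# The script must end by printing a JSON object, e.g.
#   jq -n --arg key "$pubkey" '{"public_key": $key}'
# which can then be referenced like this:
output "do_public_key" {
  value = data.external.fetchssh.result.public_key
}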

How to prevent the removal of a resource group when running terraform destroy?

I have an already-existing resource group (not created by my code).
I ran terraform apply and my infra was created. But when I run terraform destroy, the console says that my resource group will be deleted too. This should not happen, because my infra is not the only thing in this resource group.
I have tried to use terraform import as described here https://stackoverflow.com/a/47446540/10912908 and got the same result as before.
Also, I have tried to define the resource group with only a name, but that does not work; terraform destroy still removes this resource:
resource "azurerm_resource_group" "testgroup" {
name = "Test-Group"
}
You have to not include the resource group resource in the configuration if you want it not to be destroyed (since all the resources in the configuration are destroyed). If you rely on outputs from that resource, you can use a data source instead:
data "azurerm_resource_group" "test" {
name = "Test-Group"
}
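Attributes of the data source can then be referenced wherever the managed resource's outputs were used before; a purely illustrative example (the virtual network resource here is hypothetical, not from the question):
resource "azurerm_virtual_network" "example" {
  name          = "example-network"
  address_space = ["10.0.0.0/16"]

  # Pull location and name from the existing, unmanaged resource group.
  location            = data.azurerm_resource_group.test.location
  resource_group_name = data.azurerm_resource_group.test.name
}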
The OP also needed to remove the resource group from the state file (for example with terraform state rm azurerm_resource_group.testgroup).
This bash script could work:
terraform state list | while read line
do
  if [[ $line == azurerm_resource_group* ]]; then
    echo "$line is a resource group and will not be deleted!"
  else
    echo "deleting: $line"
    terraform destroy -target "$line" -auto-approve
  fi
done
It lists all resources managed by Terraform and then runs terraform destroy -target for every entry except the lines starting with azurerm_resource_group.

Change VM size after provisioning

How can I downsize a virtual machine after it has been provisioned, from a Terraform script? Is there a way to update a resource without modifying the initial .tf file?
I have a solution you could try.
1. Copy your tf file, for example cp vm.tf vm_back.tf, and move vm.tf to another directory.
2. Modify vm_size in vm_back.tf. I use this tf file, so I use the following command to change the value:
sed -i 's/vm_size = "Standard_DS1_v2"/vm_size = "Standard_DS2_v2"/g' vm_back.tf
3. Update the VM size by executing terraform apply.
4. Remove vm_back.tf and move vm.tf back to its original directory.
How about passing in a command line argument that is used in a conditional variable?
For example, declare a conditional value in your .tf file:
vm_size = "${var.vm_size == "small" ? var.small_vm : var.large_vm}"
And when you want to provision the small VM, you simply pass the vm_size variable in on the command line:
$ terraform apply -var="vm_size=small"
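For the conditional above to work, the three variables it references need to be declared. A minimal sketch, reusing the Azure sizes from the earlier answer as example defaults (the actual sizes and the default value are assumptions):
variable "vm_size" {
  # Anything other than "small" selects the large size in the conditional.
  default = "large"
}

variable "small_vm" {
  default = "Standard_DS1_v2"
}

variable "large_vm" {
  default = "Standard_DS2_v2"
}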
