I'm executing a bash script using terraform. My null_resource configuration code is given below
resource "null_resource" "mytest" {
triggers = {
run = "${timestamp()}"
}
provisioner "local-exec" {
command = "sh test.sh"
}
}
Let us take a git repo as an example resource. I'm doing a bunch of operations on the resource using the bash script. The resource names are passed using terraform.tfvars:
resource_list = ["RS1","RS2","RS3","RS4"]
If I want to add a new repo named RS5, I can update the above list by adding RS5 to it.
How will I pass the new resource name to the bash script? Here I'm not looking to hard-code the parameter as given below:
sh test.sh "RS5"
How will I pass the most recently added resource to the bash script?
Use for_each to execute the bash script once per resource in the resource_list variable:
resource "null_resource" "mytest" {
for_each = toset(var.resource_list)
# using timestamp() will cause the script to run again for every value on
# every run since the value changes every time.
# you probably don't want that behavior.
# triggers = {
# run = "${timestamp()}"
# }
provisioner "local-exec" {
command = "sh test.sh ${each.value}"
}
}
When you run this the first time it will execute the script once for each of the resources in resource_list. Then the terraform state file will be updated to know what has already run, so any subsequent runs will only include new items added to resource_list.
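If you do want an explicit trigger, a minimal variant of the block above (same variable and script names) is to trigger on the resource name itself rather than on timestamp(), so a script re-runs only when its own name changes:
resource "null_resource" "mytest" {
  for_each = toset(var.resource_list)

  # Re-run the script for a given name only when that name changes;
  # adding "RS5" to resource_list creates a new instance and runs it once.
  triggers = {
    resource_name = each.value
  }

  provisioner "local-exec" {
    command = "sh test.sh ${each.value}"
  }
}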
I've always been puzzled why I cannot create files in the $HOME directory using user_data with an aws_instance resource. Even a simple "touch a.txt" in user_data would not create the file.
I have worked around this by creating files in other directories (e.g. /etc/some_file.txt) instead. But I am really curious what the reason behind this is, and whether there is a way to create files in $HOME with user_data.
Thank you.
----- 1st edit -----
Sample code:
resource "aws_instance" "ubuntu" {
ami = var.ubuntu_ami
instance_type = var.ubuntu_instance_type
subnet_id = aws_subnet.ubuntu_subnet.id
associate_public_ip_address = "true"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.standard_sg.id]
user_data = <<-BOOTSTRAP
#!/bin/bash
touch /etc/1.txt # this file is created in /etc/1.txt
touch 2.txt # 2.txt is not created in $HOME/2.txt
BOOTSTRAP
tags = {
Name = "${var.project}_eks_master_${count.index + 1}"
}
}
I am not sure what the default working directory is when user_data runs, but I did a simple test and found the solution to your problem.
In an EC2 instance, I tried this in my user_data:
user_data = <<-EOF
  #! /bin/bash
  sudo bash -c "pwd > /var/www/html/path.html"
EOF
The result was this:
root@ip-10-0-10-10:~# cat /var/www/html/path.html
/
Did you check if you have this file created?
ls -l /2.txt
Feel free to reach me if you have any doubts.
I think I found the answer to my own question. The $HOME environment variable does not exist at the time the user_data script is run.
I tried to 'echo $HOME >> /etc/a.txt' and I got a blank line. And instead of creating a file using 'touch $HOME/1.txt', I tried 'touch /home/ubuntu/1.txt' and the file 1.txt was created.
So, I can only conclude that $HOME does not exist at the time user_data was run.
----- Update 1 -----
Did some further testing to support my findings above. When I ran sudo bash -c 'echo $HOME > /etc/a.txt', it gave me the result of /root in the file /etc/a.txt. But when I ran echo $HOME > /etc/b.txt, the file /etc/b.txt contained 0xA (just a single linefeed character).
Did another test by running set > /etc/c.txt to see if $HOME was defined & $HOME didn't exist amongst the environment variables listed in /etc/c.txt. But once the instance was up, and I ran set via an SSH session, $HOME existed & had the value /home/ubuntu.
I also wondered which user was running the script during initialization, so I tried who am i > /etc/d.txt. /etc/d.txt came out as a 0-byte file, so I still don't know which user is running during the EC2 instantiation.
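Based on these findings, one workaround sketch (assuming an Ubuntu AMI, where the default user's home is /home/ubuntu) is to stop relying on $HOME inside user_data and either use absolute paths or set HOME explicitly:
user_data = <<-BOOTSTRAP
  #!/bin/bash
  # cloud-init runs this script as root without a login shell, so $HOME
  # may be unset; export it or stick to absolute paths.
  export HOME=/root
  touch "$HOME/1.txt"      # created as /root/1.txt
  touch /home/ubuntu/2.txt # created in the default user's home directory
BOOTSTRAP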
I am running a docker container to build python packages. I am using the null_resource and local-exec for doing this as shown below:
resource "null_resource" "install_dependencies" {
provisioner "local-exec" {
command = "docker run -v (dirname "{PWD}"):/var/task 'public.ecr.aws/sam/build-${local.python_runtime}' /bin/sh -c 'pip install -r ${var.package_requirements_path} -t ${var.lambda_function_source_directory};exit'"
}
triggers = {
dependencies_versions = filemd5(var.package_requirements_path)
}
}
However, I get the error "│ An argument definition must end with a newline". My question is: what does this error mean?
It was fixed after running terraform fmt on it.
I use PyCharm, so I ran the formatting from the IDE.
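For reference, the same formatting can be run from the command line; terraform fmt rewrites the configuration files in the current directory to the canonical style:
$ terraform fmt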
I'm trying to achieve something inside a terraform local-exec provisioner. It currently looks something like this:
resource "null_resource" "ledger_summary_backup" {
  provisioner "local-exec" {
    command = <<EOT
BackupArn=`aws dynamodb create-backup --table-name ${var.environment_id}-AccountService-LedgerSummary --backup-name ${var.environment_id}-AccountService-LedgerSummary-Backup | grep -oP '(?<="BackupArn": ")[^"]*'`
aws dynamodb restore-table-from-backup --target-table-name ${var.environment_id}-AccountService-LedgerSummary-Duplicate --backup-arn $BackupArn
EOT
  }
}
This code is inside a terraform script which creates a DynamoDB table. Inside the provisioner I'm taking a backup of the table, trying to save its ARN into the variable BackupArn, and then trying to use it in the DynamoDB restore job on the second line. But it throws error exit status 255. Output: 'BackupArn' is not recognized as an internal or external command
Can someone help with this? How can I get the backup ARN (if this is not the correct way) and use it in the subsequent restore command?
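That error message ("is not recognized as an internal or external command") is what cmd.exe prints, which suggests the provisioner is running the script through the Windows shell rather than a POSIX shell. One possible sketch, assuming bash is available on the machine running terraform, and using the AWS CLI's --query option instead of grep to extract the ARN:
resource "null_resource" "ledger_summary_backup" {
  provisioner "local-exec" {
    # Run the multi-line script under bash instead of the default shell
    # (cmd.exe on Windows), so the BackupArn=... assignment is valid syntax.
    interpreter = ["bash", "-c"]
    command     = <<EOT
BackupArn=$(aws dynamodb create-backup --table-name ${var.environment_id}-AccountService-LedgerSummary --backup-name ${var.environment_id}-AccountService-LedgerSummary-Backup --query 'BackupDetails.BackupArn' --output text)
aws dynamodb restore-table-from-backup --target-table-name ${var.environment_id}-AccountService-LedgerSummary-Duplicate --backup-arn $BackupArn
EOT
  }
}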
I have already created a resource group (it was not created using my code).
I ran terraform apply and my infra was created. But when I run terraform destroy, the console says that my resource group will be deleted too. This should not happen, because my infra is not the only thing in this resource group.
I have tried to use terraform import as described here https://stackoverflow.com/a/47446540/10912908 and got the same result as before.
Also, I have tried to define the resource group with only a name, but it does not work; terraform destroy removes this resource:
resource "azurerm_resource_group" "testgroup" {
name = "Test-Group"
}
You have to not include the resource group resource in the configuration if you don't want it to be destroyed, since all the resources in the configuration are destroyed. If you rely on outputs from that resource, you can use a data source instead:
data "azurerm_resource_group" "test" {
name = "Test-Group"
}
The OP also needed to remove the resource group from the state file.
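Assuming the resource address from the question's configuration, that would be something like the following; terraform state rm only removes the entry from the state file, it does not touch the actual resource group:
$ terraform state rm azurerm_resource_group.testgroup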
This bash script could work:
terraform state list | while read line
do
  if [[ $line == azurerm_resource_group* ]]; then
    echo $line " is a resource group and will not be deleted!"
  else
    echo "deleting: " $line
    terraform destroy -target $line -auto-approve
  fi
done
It lists all resources managed by terraform and then runs terraform destroy for every entry except the lines starting with azurerm_resource_group.
How can I downsize a virtual machine after provisioning it from a terraform script? Is there a way to update a resource without modifying the initial .tf file?
I have a solution, maybe you could try it.
1. Copy your tf file, for example cp vm.tf vm_back.tf, and move vm.tf to another directory.
2. Modify vm_size in vm_back.tf. I use this tf file, so I use the following command to change the value:
sed -i 's/vm_size = "Standard_DS1_v2"/vm_size = "Standard_DS2_v2"/g' vm_back.tf
3. Update the VM size by executing terraform apply.
4. Remove vm_back.tf and move vm.tf back to its original directory.
How about passing in a command line argument that is used in a conditional variable?
For example, declare a conditional value in your .tf file:
vm_size = "${var.vm_size == "small" ? var.small_vm : var.large_vm}"
And when you want to provision the small VM, you simply pass the vm_size variable in on the command line:
$ terraform apply -var="vm_size=small"
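For this to work, the three variables referenced in the conditional have to be declared somewhere. A minimal sketch, where the names mirror the snippet above and the default sizes are just example values, could look like this:
variable "vm_size" {
  description = "Size profile to provision, e.g. \"small\" or \"large\"."
  default     = "large"
}

variable "small_vm" {
  description = "VM size used when vm_size is \"small\" (example value)."
  default     = "Standard_DS1_v2"
}

variable "large_vm" {
  description = "VM size used when vm_size is \"large\" (example value)."
  default     = "Standard_DS2_v2"
}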