Change VM size after provisioning - Azure

How can I downsize a virtual machine after it has been provisioned, from a Terraform script? Is there a way to update a resource without modifying the initial .tf file?

I have a solution you could try.
1. Copy your .tf file, for example cp vm.tf vm_back.tf, and move vm.tf to another directory.
2. Modify vm_size in vm_back.tf. I use this .tf file, so I change the value with the following command:
sed -i 's/vm_size = "Standard_DS1_v2"/vm_size = "Standard_DS2_v2"/g' vm_back.tf
3. Update the VM size by running terraform apply.
4. Remove vm_back.tf and move vm.tf back to its original directory.
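Taken together, the steps above amount to something like this (a sketch; the /tmp holding directory is an arbitrary choice, not part of the original answer):

cp vm.tf vm_back.tf && mv vm.tf /tmp/   # step 1: back up and park the original
sed -i 's/vm_size = "Standard_DS1_v2"/vm_size = "Standard_DS2_v2"/g' vm_back.tf   # step 2
terraform apply                          # step 3
rm vm_back.tf && mv /tmp/vm.tf .         # step 4: restore the original file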

How about passing in a command line argument that is used in a conditional variable?
For example, declare a conditional value in your .tf file:
vm_size = "${var.vm_size == "small" ? var.small_vm : var.large_vm}"
And when you want to provision the small VM, you simply pass the vm_size variable in on the command line:
$ terraform apply -var="vm_size=small"
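For this to work, the three variables referenced in the conditional have to be declared somewhere in your configuration. A minimal sketch (the defaults here are assumptions, sized to match the question):

variable "vm_size" {
  default = "large"
}

variable "small_vm" {
  default = "Standard_DS1_v2"
}

variable "large_vm" {
  default = "Standard_DS2_v2"
}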


Unable to create files in $HOME directory using user_data

I've always been puzzled why I cannot create files in the $HOME directory using user_data with an aws_instance resource. Even a simple "touch a.txt" in user_data would not create the file.
I have worked around this by creating files in other directories (e.g. /etc/some_file.txt) instead. But I am really curious what the reason behind this is, and whether there is a way to create files in $HOME with user_data.
Thank you.
----- 1st edit -----
Sample code:
resource "aws_instance" "ubuntu" {
ami = var.ubuntu_ami
instance_type = var.ubuntu_instance_type
subnet_id = aws_subnet.ubuntu_subnet.id
associate_public_ip_address = "true"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.standard_sg.id]
user_data = <<-BOOTSTRAP
#!/bin/bash
touch /etc/1.txt # this file is created in /etc/1.txt
touch 2.txt # 2.txt is not created in $HOME/2.txt
BOOTSTRAP
tags = {
Name = "${var.project}_eks_master_${count.index + 1}"
}
}
I am not sure what the default working directory is for user_data, but I did a simple test and found an answer to your problem.
In an EC2 instance, I tried this in my user_data:
user_data = <<-EOF
  #!/bin/bash
  sudo bash -c "pwd > /var/www/html/path.html"
EOF
The result was this:
root@ip-10-0-10-10:~# cat /var/www/html/path.html
/
Did you check if you have this file created?
ls -l /2.txt
Feel free to reach out if you have any doubts.
I think I found the answer to my own question. The $HOME environment variable does not exist at the time the user_data script is run.
I tried 'echo $HOME >> /etc/a.txt' and got a blank line. And instead of creating a file with 'touch $HOME/1.txt', I tried 'touch /home/ubuntu/1.txt' and the file 1.txt was created.
So I can only conclude that $HOME does not exist at the time user_data is run.
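A practical workaround, then, is to avoid $HOME in user_data altogether, or to set it yourself. A minimal sketch (assuming an Ubuntu AMI with the default ubuntu user):

user_data = <<-BOOTSTRAP
  #!/bin/bash
  # user_data runs without a login environment, so $HOME is unset.
  # Option 1: set HOME explicitly before anything that relies on it
  export HOME=/root
  # Option 2: use absolute paths instead of $HOME
  touch /home/ubuntu/2.txt
  chown ubuntu:ubuntu /home/ubuntu/2.txt
BOOTSTRAP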
----- Update 1 -----
Did some further testing to support my findings above. When I ran sudo bash -c 'echo $HOME > /etc/a.txt', it gave me /root in the file /etc/a.txt. But when I ran echo $HOME > /etc/b.txt, the file /etc/b.txt contained only 0x0A (a single linefeed character).
Did another test by running set > /etc/c.txt to see if $HOME was defined, and $HOME didn't exist among the environment variables listed in /etc/c.txt. But once the instance was up and I ran set via an SSH session, $HOME existed and had the value /home/ubuntu.
I also wondered who was running during the initialization, so I tried who am i > /etc/d.txt. And /etc/d.txt was a 0-byte file, so I still don't know which user runs during EC2 instantiation. (The sudo test above pointing at /root suggests the script is executed by root, just without a login shell, which would explain why $HOME is unset and why who am i prints nothing.)

Terraform local-exec to pass new resource (not hard-coding)

I'm executing a bash script using Terraform. My null_resource configuration is given below:
resource "null_resource" "mytest" {
  triggers = {
    run = "${timestamp()}"
  }

  provisioner "local-exec" {
    command = "sh test.sh"
  }
}
Let us take a git repo as an example resource. I'm doing a bunch of operations on the resource using the batch script. The resource names are passed in using terraform.tfvars:
resource_list = ["RS1","RS2","RS3","RS4"]
If I want to add a new repo named RS5, I can update the above list by adding RS5 to it.
But how will I pass the new resource name to the batch script? I'm not looking to hard-code the parameter, as in:
sh test.sh "RS5"
How will I pass only the most recently added resource to the batch script?
Use for_each to execute the batch script once per entry in the resource_list variable:
resource "null_resource" "mytest" {
  for_each = toset(var.resource_list)

  # Using timestamp() would cause the script to run again for every value
  # on every run, since the value changes every time.
  # You probably don't want that behavior.
  # triggers = {
  #   run = "${timestamp()}"
  # }

  provisioner "local-exec" {
    command = "sh test.sh ${each.value}"
  }
}
When you run this the first time, it will execute the script once for each entry in resource_list. The Terraform state file then records what has already run, so subsequent runs will only execute the script for new items added to resource_list.
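For completeness, the variable this relies on would be declared something like this (a sketch; the type is an assumption based on the tfvars shown above):

variable "resource_list" {
  type    = list(string)
  default = []
}

With that in place, appending "RS5" to resource_list in terraform.tfvars and running terraform apply creates exactly one new null_resource, so test.sh is invoked once with RS5 as its argument.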

Cannot get the outputs when terraform output is run

I am making use of modules, and this is the structure of my files. provisioner and module are different folders; the main.tf in stack calls the modules.
provisioner/
  stack/
    main.tf
    variables.tf
module/
  aks/
    main.tf
    outputs.tf
    variables.tf
  postgresql/
    main.tf
    outputs.tf
    variables.tf
When i run "terraform apply" command in the provsioner directory,it is expected to return the outputs after the apply is done. I dont get outputs.
When i run 'terraform output' i get- "The state file either has no outputs defined, or all the defined outputs are empty. Please define an output in your configuration with the output keyword and run terraform refresh for it to become available. If you are using interpolation, please verify the interpolated value is not empty"
I would want to know why is this happening?
Terraform 0.12 and later intentionally track only root module outputs in the state. To expose module outputs for external consumption, you must export them from the root module using an output block, which as of 0.12 can be done for a whole module in one output, like this:
output "example_module" {
  value = module.example_module
}
So for your code, add an output.tf file in the root and then add an output statement for whatever you need output after apply.
Github issue : https://github.com/hashicorp/terraform/issues/22126
Adding on to @crewy_stack's answer. Let's say your module is named sample_ec2_mod. Inside the module directory, ensure that the outputs are specified in outputs.tf.
On the main.tf in the root folder, add the following:
module "sample_ec2_mod" {
source = "./ec2"
}
output "ec2_module" {
value = module.sample_ec2_mod
}
When you enter terraform apply in your CLI, it should show the outputs after applying. You can also simply run terraform output at any time to see them.
Module output vars won't be printed out by default. You have to explicitly define the output vars in your provisioner dir.
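If you'd rather not expose the whole module, you can forward a single value instead. A minimal sketch (assuming stack/main.tf instantiates the aks folder as module "aks", and that the module's outputs.tf defines an output named host; both names are hypothetical):

output "aks_host" {
  value = module.aks.host
}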

How can I read environment variables from a .env file into a terraform script?

I'm building a Lambda function on AWS with Terraform. The syntax within the Terraform script for setting an env var is:
resource "aws_lambda_function" "name_of_function" {
...
environment {
variables = {
foo = "bar"
}
}
}
Now I have a .env file in my repo with a bunch of variables, e.g. email='admin@example.com', and I'd like Terraform to read (some of) them from that file and inject them into the set of env vars uploaded and made available to the Lambda function. How can this be done?
This is a pure Terraform solution that parses the .env file into a native Terraform map:
output "foo" {
  value = { for tuple in regexall("(.*)=(.*)", file("/path/to/.env")) : tuple[0] => tuple[1] }
}
I have defined it as an output to do quick tests via the CLI, but you can always define it as a local or use it directly in an argument.
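For the Lambda use case in the question, that could look something like the sketch below (the function name, role, handler, runtime, and filename are placeholders, not part of the original answer):

locals {
  envs = { for tuple in regexall("(.*)=(.*)", file("${path.module}/.env")) : tuple[0] => tuple[1] }
}

resource "aws_lambda_function" "name_of_function" {
  function_name = "name_of_function"
  role          = aws_iam_role.lambda_role.arn # placeholder role
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = "lambda.zip"

  environment {
    variables = local.envs # the parsed .env map becomes the Lambda env vars
  }
}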
My solution for now involves 3 steps.
Place your variables in a .env file like so:
export TF_VAR_FOO=bar
Yes, it's slightly annoying that you HAVE TO prefix your vars with TF_VAR_, but I've learned to live with it (at least it makes it clear in your .env which vars will be used by Terraform and which will not).
In your TF script, you then have to declare any such variable without the TF_VAR_ prefix, e.g.:
variable "FOO" {
  description = "Optionally say something about this variable"
}
Before you run terraform, you need to source your .env file (. .env) so that any such variable is available to processes launched from your current shell (viz. terraform here). Adding this step doesn't bother me since I always operate my repo with bash scripts anyway.
Note: I put in a request for a better way to handle .env files here though I'm actually now quite happy with things as is (it was just poorly described in the official documentation how TF relates to .env files IMO).
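Putting the three steps together, a typical session looks something like this (a sketch, assuming the .env and variable declaration above):

$ . .env            # exports TF_VAR_FOO=bar into the current shell
$ terraform apply   # Terraform populates var.FOO from TF_VAR_FOO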
Based on @Uizz Underweasel's response, I just added the sensitive function for security purposes:
Definition in variables.tf - more secure
locals {
  envs = { for tuple in regexall("(.*)=(.*)", file(".env")) : tuple[0] => sensitive(tuple[1]) }
}
Definition in variables.tf - less secure
locals {
  envs = { for tuple in regexall("(.*)=(.*)", file(".env")) : tuple[0] => tuple[1] }
}
An example of .env
HOST=127.0.0.1
USER=my_user
PASSWORD=pass123
The usage
output "envs" {
value = local.envs["HOST"]
sensitive = true # this is required if the sensitive function was used when loading .env file (more secure way)
}
Building on top of the great advice from @joe-jacobs, you can avoid having to prefix all variables with TF_VAR_ by wrapping the terraform command with the excellent https://github.com/cloudposse/tfenv utility.
This means you can leave the vars defined as FOO=bar in the .env file, which is useful for re-using them for other purposes.
Then run a command like:
dotenv run tfenv terraform plan
Terraform will be able to find an env variable TF_VAR_FOO set to bar. 🎉
You could use a terraform.tfvars or .auto.tfvars file. The second one is probably more like the .env file, because it's hidden too.
It's not exactly the bash .env format, but something very similar.
For example:
.env
var1=1
var2="two"
.auto.tfvars
var1 = 1
var2 = "two"
You can also use JSON format. Personally, I've never done it, but it's possible; see the sketch after the variable declarations below.
It's described in the official docs:
Terraform also automatically loads a number of variable definitions files if they are present:
Files named exactly terraform.tfvars or terraform.tfvars.json.
Any files with names ending in .auto.tfvars or .auto.tfvars.json.
You still have to declare the variables, though:
variable "var1" {
  type = number
}

variable "var2" {
  type = string
}
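As for the JSON variant mentioned above, a minimal sketch of an equivalent terraform.tfvars.json (or .auto.tfvars.json) file:

{
  "var1": 1,
  "var2": "two"
}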
I ended up writing a simple PHP script to read the env file in, parse it how I wanted (explode(), remove lines with '#', drop other vars, etc., then implode() again), and just eval that in my Makefile first. It looks like this:
provision:
	@eval $(php ./scripts/terraform-env-vars.php); \
	terraform init ./deployment/terraform; \
	terraform apply -auto-approve ./deployment/terraform/
Unfortunately, Terraform had the ridiculous idea that all environment variables must be prefixed with TF_VAR_.
I solved this with a combination of grep and sed, the idea being to regex-replace all environment variables with the required prefix.
Firstly you need to declare an input variable with the same name as the environment variable in your .tf file:
variable "MY_ENV_VAR" {
type = string
}
Then, before the terraform command, use the following:
export $(shell sed -E 's/(.*)/TF_VAR_\1/' my.env | grep -v "#" | grep -v "TF_VAR_$"); terraform apply
What this does:
Uses sed to grab each line in a capture group with (.*), then prefixes the first capture group (\1) with TF_VAR_ in the replacement.
Uses grep to remove all lines that contain a comment (anything with a #).
Uses grep to remove all lines that contain only TF_VAR_ (i.e., lines that were blank before the prefix was added).
Unfortunately, this also creates a bunch of other environment variables with the TF_VAR_ prefix, and I'm not sure why, but it's at least a start to using .env files with Terraform.
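To illustrate, here's roughly what the pipeline produces for a hypothetical my.env:

$ cat my.env
FOO=bar
# a comment

BAZ=qux
$ sed -E 's/(.*)/TF_VAR_\1/' my.env | grep -v "#" | grep -v "TF_VAR_$"
TF_VAR_FOO=bar
TF_VAR_BAZ=qux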
I like Python, and I don't like polluting my environment variables, so I used a python3 virtual environment and the python-dotenv[cli] package. I know there's a dotenv in other languages; those may work also.
Here's what I did...
It looks something like this on my Mac.
Set up your virtual environment, activate it, and install the package:
python3 -m venv venv
source venv/bin/activate
pip install "python-dotenv[cli]"
Put your environment variables into a .env file in the local directory
foo1 = "bar1"
foo2 = "bar2"
Then, when you run Terraform, use the dotenv CLI prefix:
dotenv run terraform plan
When you're done with your virtual environment
deactivate
To start again at a later date, from your TF directory
source venv/bin/activate
This allows me to load up environment variables when I run other TF without storing them.

How to prevent the removal of a resource group when running terraform destroy?

I have an already-created resource group (not created by my code).
I ran terraform apply and my infra was created. But when I run terraform destroy, the console says that my resource group will be deleted too. This should not happen, because the resource group contains more than just my infra.
I have tried to use terraform import as described here https://stackoverflow.com/a/47446540/10912908 and got the same result as before.
Also, I have tried to define the resource group with only a name, but that does not work; terraform destroy still removes this resource:
resource "azurerm_resource_group" "testgroup" {
name = "Test-Group"
}
For the resource group not to be destroyed, you have to leave the resource group resource out of the configuration (as all the resources in the configuration are to be destroyed). If you rely on outputs from that resource, you can use a data source instead:
data "azurerm_resource_group" "test" {
name = "Test-Group"
}
The OP also needed to remove the resource group from the state file.
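That can be done with terraform state rm; using the resource address from the question's code (a sketch):

terraform state rm azurerm_resource_group.testgroup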
This bash script could work:
terraform state list | while read line
do
  if [[ $line == azurerm_resource_group* ]]; then
    echo $line " is a resource group and will not be deleted!"
  else
    echo "deleting: " $line
    terraform destroy -target $line -auto-approve
  fi
done
It lists all resources managed by Terraform and then runs a targeted destroy for every entry except those beginning with azurerm_resource_group.
