Cannot get the outputs when terraform output is run - terraform

I am making use of modules, and this is the structure of my files. The provisioner and module folders are separate; the main.tf in stack calls the modules.
> provisioner
> stack
|--main.tf
|--variables.tf
> module (folder)
|--aks
| |--main.tf
| |--outputs.tf
| |--variables.tf
|
|--postgresql
| |--main.tf
| |--outputs.tf
| |--variables.tf
When I run "terraform apply" in the provisioner directory, it is expected to return the outputs after the apply is done, but I don't get any outputs.
When I run "terraform output" I get: "The state file either has no outputs defined, or all the defined outputs are empty. Please define an output in your configuration with the output keyword and run terraform refresh for it to become available. If you are using interpolation, please verify the interpolated value is not empty"
I would like to know why this is happening.

Terraform 0.12 and later intentionally track only root module outputs in the state. To expose module outputs for external consumption, you must export them from the root module using an output block, which, as of 0.12, can be done for an entire module in a single output, like this:
output "example_module" {
  value = module.example_module
}
So for your code, add an outputs.tf file in the root module and declare an output block for each value you want to see after apply.
GitHub issue: https://github.com/hashicorp/terraform/issues/22126
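If you only want to expose a single value rather than the whole module object, you can forward just that attribute from the root module. This is only a sketch: aks_cluster_name and the child output cluster_name are made-up names, so substitute whatever your module/aks/outputs.tf actually defines.
# stack/outputs.tf (root module)
# "cluster_name" is assumed to exist in module/aks/outputs.tf
output "aks_cluster_name" {
  value = module.aks.cluster_name
}
After terraform apply, running terraform output aks_cluster_name prints just that value.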

Adding on to @crewy_stack's answer. Let's say your module is named sample_ec2_mod. Inside the module directory, ensure that the outputs are specified in outputs.tf.
In the main.tf in the root folder, add the following:
module "sample_ec2_mod" {
  source = "./ec2"
}

output "ec2_module" {
  value = module.sample_ec2_mod
}
When you run terraform apply in your CLI, it should print the outputs at the end of the run. After applying, simply use terraform output to see them again.

Module output variables are not printed by default. You have to explicitly declare corresponding outputs in your provisioner (root) directory.
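As a sketch matched to the directory layout in the question (assuming the module blocks in stack/main.tf are named aks and postgresql), the root configuration could simply pass both child modules through:
# outputs.tf in the root configuration (hypothetical block names)
output "aks" {
  value = module.aks
}

output "postgresql" {
  value = module.postgresql
}
terraform output will then show everything the two modules export.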

Related

parse error when trying to parse terraform output in GitHub Actions workflow with jq

I have a GitHub Actions workflow that reads output from a terraform configuration. I'm trying to do this:
terraform -chdir=terraform/live/dev output -json > /tmp/output.json
APP_URL=$(cat /tmp/output.json | jq -r '.app_url.value')
I'm getting the following error in the GitHub Action logs:
parse error: Invalid numeric literal at line 1, column 9
I added the following to debug this:
# debugging output.json file
echo "output.json:"
cat /tmp/output.json
And I'm finding that the output of cat /tmp/output.json is:
/home/runner/work/_temp/2b622f60-be99-4a29-a295-593b06dde9a8/terraform-bin -chdir=terraform/live/dev output -json
{
  "app_url": {
    "sensitive": false,
    "type": "string",
    "value": "https://app.example.com"
  }
}
This tells me that jq can't parse the temporary file I wrote the terraform JSON output to, because something seems to be adding the command itself to the file:
/home/runner/work/_temp/2b622f60-be99-4a29-a295-593b06dde9a8/terraform-bin -chdir=terraform/live/dev output -json
How can I get the terraform output as JSON and write it to a file without the extra header line that is causing the parse error?
When I run the same commands locally, I do not get this parse error.
Here's the code for the section of my GitHub Action workflow that is producing this error: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/terraform_frontend_update.yml#L72-L74
Things I have tried:
using cd terraform/live/dev instead of -chdir=terraform/live/dev - this resulted in the same error
I was able to fix this issue by adding the following setting to the setup-terraform@v1 action:
- uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: 1.1.7
    terraform_wrapper: false # <-- added this
More documentation about this setting can be found here: https://github.com/hashicorp/setup-terraform#inputs
Whilst @briancaffey's answer is exactly right, if you need to use the terraform_wrapper for other parts of your workflow (e.g. using the output), you can switch out the problematic terraform calls for terraform-bin instead.
If you also want to run the script outside GitHub Actions, the following workaround will do the trick:
tf=terraform-bin
type "$tf" >/dev/null 2>&1 || tf=terraform
$tf output -json > /tmp/output.json
See https://github.com/hashicorp/setup-terraform/issues/167#issuecomment-1090760365 for an issue that mentions the same workaround.

How to add an input variable in terraform terraform.tfvars from JSON file?

How to add an input variable in terraform terraform.tfvars from a JSON file?
I have written Terraform code for Snowflake. I have defined the input variables manually in terraform.tfvars, but I would like to take input in JSON format and feed it into terraform.tfvars.
Does anyone have an example of taking an input variable in JSON format, including it in terraform.tfvars, and running the Terraform code?
Thanks for your support.
The easiest way would be json2hcl. All you need to do is install the tool:
curl -SsL https://github.com/kvz/json2hcl/releases/download/v0.0.6/json2hcl_v0.0.6_darwin_amd64 \
| sudo tee /usr/local/bin/json2hcl > /dev/null && sudo chmod 755 /usr/local/bin/json2hcl && json2hcl -version
and then run it as follows:
$ json2hcl < fixtures/infra.tf.json
"output" "arn" {
  "value" = "${aws_dynamodb_table.basic-dynamodb-table.arn}"
}
... rest of HCL truncated
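Alternatively, if the goal is just to consume JSON input inside the configuration rather than convert it to HCL, Terraform's built-in file() and jsondecode() functions can read the JSON directly. A minimal sketch, where input.json and the warehouse_name key are illustrative names only:
# Read values straight from a JSON file without converting it to HCL first.
locals {
  json_inputs = jsondecode(file("${path.module}/input.json"))
}

output "warehouse_name" {
  value = local.json_inputs.warehouse_name
}
Terraform also natively loads variable definition files named terraform.tfvars.json or *.auto.tfvars.json, so in many cases the JSON can be used as-is without any conversion at all.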

Why is `terraform fmt` outputting a filename when it doesn't appear to have changed anything?

I have a file, A, committed into git.
If I do a terraform fmt --check --recursive . it outputs A.
However, if I then do a git diff I get blank output and git status reports no changes.
If I re-run terraform fmt --check --recursive . it again outputs A.
Any suggestions what's going wrong?
My understanding from https://www.terraform.io/docs/cli/commands/fmt.html
is that it will only output a filename if it has changed that file.
EXAMPLE
resource "aws_vpc" "test_vpc" {
  cidr_block = "192.168.0.0/16"
  instance_tenancy = "default"
}
The issue is with the instance_tenancy line. Looking at it in vi, there are no odd characters that I can see.
Terraform v0.15.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.37.0
The -check flag also implies -write=false, so terraform fmt never rewrites the file; it only lists files whose formatting differs from the canonical style. That is why git shows no changes and why re-running the command reports A again.
From the Terraform docs:
-write=false - Don't overwrite the input files. (This is implied by -check or when the input is STDIN.)
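For context, a common reason fmt flags a file that looks fine at a glance is argument alignment: the canonical style lines up the = signs of consecutive arguments in a block. Assuming that is what is happening here, the formatted version of the example would be:
# What terraform fmt would write (note the aligned "=" signs)
resource "aws_vpc" "test_vpc" {
  cidr_block       = "192.168.0.0/16"
  instance_tenancy = "default"
}
Running terraform fmt without -check rewrites the file, after which the filename stops being reported.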

How can I read environment variables from a .env file into a terraform script?

I'm building a Lambda function on AWS with Terraform. The syntax within the Terraform script for setting an env var is:
resource "aws_lambda_function" "name_of_function" {
  ...
  environment {
    variables = {
      foo = "bar"
    }
  }
}
Now I have a .env file in my repo with a bunch of variables, e.g. email='admin@example.com', and I'd like Terraform to read (some of) them from that file and inject them into the environment variables block so they are uploaded and available to the Lambda function. How can this be done?
This is a pure Terraform-only solution that parses the .env file into a native Terraform map.
output "foo" {
  value = { for tuple in regexall("(.*)=(.*)", file("/path/to/.env")) : tuple[0] => tuple[1] }
}
I have defined it as an output to do quick tests via CLI but you can always define it as a local or directly on an argument.
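To tie this back to the Lambda example in the question, the same expression can live in a local and be passed straight into the environment block. This is only a sketch: everything except the environment block is elided, and the resource name is simply the one from the question.
# Parse .env into a map once, then reuse it for the Lambda environment.
locals {
  envs = { for tuple in regexall("(.*)=(.*)", file("${path.module}/.env")) : tuple[0] => tuple[1] }
}

resource "aws_lambda_function" "name_of_function" {
  # ... function_name, role, runtime, handler, etc. ...

  environment {
    variables = local.envs
  }
}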
My solution for now involves 3 steps.
Place your variables in a .env file like so:
export TF_VAR_FOO=bar
Yes, it's slightly annoying that you HAVE TO prefix your vars with TF_VAR_, but I've learned to live with it (at least it makes it clear in your .env which vars will be used by Terraform and which will not).
In your TF script, you have to declare any such variable without the TF_VAR_ prefix. E.g.
variable "FOO" {
  description = "Optionally say something about this variable"
}
Before you run terraform, you need to source your .env file (. .env) so that any such variable is available to processes you want to launch in your present shell environment (viz. terraform here). Adding this step doesn't bother me since I always operate my repo with bash scripts anyway.
Note: I put in a request for a better way to handle .env files here though I'm actually now quite happy with things as is (it was just poorly described in the official documentation how TF relates to .env files IMO).
Based on @Uizz Underweasel's response, I just added the sensitive function for security purposes:
Definition in variables.tf - more secure
locals {
  envs = { for tuple in regexall("(.*)=(.*)", file(".env")) : tuple[0] => sensitive(tuple[1]) }
}
Definition in variables.tf - less secure
locals {
  envs = { for tuple in regexall("(.*)=(.*)", file(".env")) : tuple[0] => tuple[1] }
}
An example of .env
HOST=127.0.0.1
USER=my_user
PASSWORD=pass123
The usage
output "envs" {
  value     = local.envs["HOST"]
  sensitive = true # this is required if the sensitive function was used when loading the .env file (more secure way)
}
Building on top of the great advice from @joe-jacobs, to avoid having to prefix all variables with TF_VAR_, encapsulate the call to the terraform command with the excellent https://github.com/cloudposse/tfenv utility.
This means you can leave the vars defined as FOO=bar in the .env file, which is useful for re-using them for other purposes.
Then run a command like:
dotenv run tfenv terraform plan
Terraform will be able to find an env variable TF_VAR_FOO set to bar. 🎉
You could use a terraform.tfvars or .auto.tfvars file. The second one is probably more like .env, because it's hidden too.
Not exactly bash .env format, but something very similar.
For example:
.env
var1=1
var2="two"
.auto.tfvars
var1 = 1
var2 = "two"
You can also use JSON format. Personally, I've never done it, but it's possible.
It's described in the official docs:
Terraform also automatically loads a number of variable definitions files if they are present:
Files named exactly terraform.tfvars or terraform.tfvars.json.
Any files with names ending in .auto.tfvars or .auto.tfvars.json.
You still have to declare variables though:
variable "var1" {
  type = number
}

variable "var2" {
  type = string
}
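If you go this route, the declared variables can then be referenced wherever they are needed. A small sketch for the Lambda example from the question (Lambda environment values must be strings, hence tostring() for the number):
resource "aws_lambda_function" "name_of_function" {
  # ... other required arguments elided ...

  environment {
    variables = {
      var1 = tostring(var.var1)
      var2 = var.var2
    }
  }
}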
I ended up writing a simple PHP script to read the env file in, parse it how I wanted (explode(), remove lines with '#', no other vars, etc., then implode() again), and just eval that in my Makefile first. It looks like this:
provision:
	@eval $$(php ./scripts/terraform-env-vars.php); \
	terraform init ./deployment/terraform; \
	terraform apply -auto-approve ./deployment/terraform/
Unfortunately, Terraform had the ridiculous idea that all environment variables must be prefixed with TF_VAR_.
I solved this with a combination of grep and sed, with the idea that I would regex replace all environment variables with the required prefix.
Firstly you need to declare an input variable with the same name as the environment variable in your .tf file:
variable "MY_ENV_VAR" {
  type = string
}
Then, before the terraform command, use the following:
export $(shell sed -E 's/(.*)/TF_VAR_\1/' my.env | grep -v "#" | grep -v "TF_VAR_$"); terraform apply
What this does:
Uses sed to grab each line in a capture group with (.*), then in the replacement prefixes the first capture group (\1) with TF_VAR_.
Uses grep to remove all lines that contain a comment (anything with a #).
Uses grep to remove all lines that contain only the TF_VAR_ prefix (i.e. blank lines in the .env file).
Unfortunately this also creates a bunch of other environment variables with the TF_VAR_ prefix and I'm not sure why, but it's at least a start to using .env files with Terraform.
I like Python, and I don't like polluting my environment variables, so I used a Python 3 virtual environment and the python-dotenv[cli] package. I know there's a dotenv in other languages; those may work also.
Here's what I did...
Looks something like this on my Mac
Set up your virtual environment, activate it, and install the package:
python3 -m venv venv
source venv/bin/activate
pip install "python-dotenv[cli]"
Put your environment variables into a .env file in the local directory
foo1 = "bar1"
foo2 = "bar2"
Then when you run Terraform, use the dotenv CLI prefix:
dotenv run terraform plan
When you're done with your virtual environment
deactivate
To start again at a later date, from your TF directory
source venv/bin/activate
This allows me to load up environment variables when I run other Terraform commands without storing them.

Change VM size after provisioning

How can I downsize a virtual machine after provisioning it, from a Terraform script? Is there a way to update a resource without modifying the initial .tf file?
I have a solution you could try.
1. Copy your .tf file, for example cp vm.tf vm_back.tf, and move vm.tf to another directory.
2. Modify vm_size in vm_back.tf. I use this tf file, so I use the following command to change the value:
sed -i 's/vm_size = "Standard_DS1_v2"/vm_size = "Standard_DS2_v2"/g' vm_back.tf
3. Update the VM size by executing terraform apply.
4. Remove vm_back.tf and move vm.tf back to its original directory.
How about passing in a command line argument that is used in a conditional variable?
For example, declare a conditional value in your .tf file:
vm_size = "${var.vm_size == "small" ? var.small_vm : var.large_vm}"
And when you want to provision the small VM, you simply pass the vm_size variable in on the command line:
$ terraform apply -var="vm_size=small"
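For this conditional to work, the three variables it references need to be declared. A minimal sketch, with the Azure VM sizes below chosen purely as example values:
variable "vm_size" {
  type    = string
  default = "large"
}

variable "small_vm" {
  type    = string
  default = "Standard_DS1_v2"
}

variable "large_vm" {
  type    = string
  default = "Standard_DS2_v2"
}
Passing -var="vm_size=small" on the command line then switches the size without editing the initial .tf file; omitting it provisions the large size.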
