How do I set a computed local variable - terraform

I have a tf.json file that declares a bunch of local variables. One of the variables is an array of complex objects, like so:
{
  "locals": [
    {
      "ordered_cache_behaviors": [
        {
          "path_pattern": "/poc-app-angular*",
          "s3_target": "dev-ui-static",
          "ingress": "external"
        }
      ]
    }
  ]
}
Here is what I want to do: instead of declaring ordered_cache_behaviors statically in my file, I want it to be a computed value. I will get this configuration from an S3 bucket and set the value here. So statically the value will only be an empty array [] that I will append to with a script after getting the data from S3.
This logic needs to execute each time before a terraform plan or terraform apply. What is the best way to do this? I am assuming I need to use a provisioner to fire off a script? If so, how do I then set the local variable here?

If the cache configuration data can be JSON-formatted, you may be able to use the aws_s3_bucket_object data source plus the jsondecode function as an alternative approach:
Upload your cache configuration data to the poc-app-cache-config bucket as cache-config.json, and then use the following to have Terraform download that file from S3 and parse it into your local ordered_cache_behaviors value:
data "aws_s3_bucket_object" "cache_configuration" {
bucket = "poc-app-cache-config"
key = "cache-config.json" # JSON-formatted cache configuration map
}
...
locals {
ordered_cache_behaviors = jsondecode(aws_s3_bucket_object.cache_configuration.body)
}
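The uploaded cache-config.json would then hold the same structure you were declaring statically, for example (a hypothetical file matching the shape from the question):

[
  {
    "path_pattern": "/poc-app-angular*",
    "s3_target": "dev-ui-static",
    "ingress": "external"
  }
]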

Is there a way to extend a variables.tf file with another?

An example scenario: I have a module, and it has variables. For example, an S3 bucket module would need a variable for the name of the bucket.
The variables.tf file would look like:
variable "bucket_name" { type = string }
Now if I want to use that module to create the S3 bucket and then assign it to a CloudFront distribution, I have to pass the bucket name through at that level to the bucket module. For example, if I want to give the distribution a name, I need to create a variables.tf for the distribution with the distribution name defined; but because I use the S3 bucket module, I have to include all of the S3 bucket's variables as well. So the variables.tf file of the distribution would look like:
variable "bucket_name" { type = string }
variable "distribution_name" { type = string }
If I have a module that is only for us-east-1 and it needs the CloudFront distribution, I need to create a variables.tf file for that as well, containing all of the above.
If I have multiple AWS accounts, for example staging and prod, and want the distribution included in both (or only one, it doesn't matter), I have to create yet another variables.tf file containing all of the above.
Every time I add a new variable, I have to update at least four variables.tf files, which is definitely not how it should be (Terraform is supposed to reduce hard-coding); see the sketch below.
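For illustration, the wiring currently looks roughly like this (module paths and names are hypothetical):

# distribution/variables.tf — must re-declare the bucket module's variables
variable "bucket_name"       { type = string }
variable "distribution_name" { type = string }

# distribution/main.tf — and pass them through by hand
module "bucket" {
  source      = "../bucket"
  bucket_name = var.bucket_name
}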
What I would like is, as I go up the ladder (from the S3 module to the environment module), to import the variables.tf file of the "child" module and extend it.
Just for an imaginary scenario, I'd like to:
# bucket/variables.tf
variable "bucket_name" { type = string }

# distribution/variables.tf
import_variables {
  source = "../bucket"
}

# us-east-1/variables.tf
import_variables {
  source = "../distribution"
}

# staging/us-east-1/variables.tf
import_variables {
  source = "../../us-east-1"
}
Am I taking a completely wrong approach to Terraform, or have I just missed a method with which this variable-definition sharing could be done?

How do I apply a CRD from github to a cluster with terraform?

I want to install a CRD with Terraform. I was hoping it would be as easy as doing this:
data "http" "crd" {
url = "https://raw.githubusercontent.com/kubernetes-sigs/application/master/deploy/kube-app-manager-aio.yaml"
request_headers = {
Accept = "text/plain"
}
}
resource "kubernetes_manifest" "install-crd" {
manifest = data.http.crd.body
}
But I get this error:
can't unmarshal tftypes.String into *map[string]tftypes.Value, expected
map[string]tftypes.Value
Trying to convert it to yaml with yamldecode also doesn't work because yamldecode doesn't support multi-doc yaml files.
I could use exec, but I was already doing that while waiting for the kubernetes_manifest resource to be released. Does kubernetes_manifest only support a single resource or can it be used to create several from a raw text manifest file?
From the kubernetes_manifest documentation (emphasis mine):
Represents one Kubernetes resource by supplying a manifest attribute
That sounds to me like it does not support multiple resources / a multi-doc yaml file.
However, you can manually split the incoming document and yamldecode the parts of it:
locals {
  yamls = [for data in split("---", data.http.crd.body) : yamldecode(data)]
}

resource "kubernetes_manifest" "install-crd" {
  count    = length(local.yamls)
  manifest = local.yamls[count.index]
}
Unfortunately on my machine this then complains about
'status' attribute key is not allowed in manifest configuration
for exactly one of the 11 manifests.
And since I have no clue about Kubernetes, I have no idea what that means or whether or not it needs fixing.
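For what it's worth, status is a server-managed field, and kubernetes_manifest rejects manifests that include it; one way around that error (a sketch building on the local.yamls list above) is to strip that key before applying:

locals {
  # Drop the server-managed "status" key from each decoded manifest.
  manifests = [
    for doc in local.yamls : { for k, v in doc : k => v if k != "status" }
  ]
}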
Alternatively, you can always use a null_resource with a script that fetches the yaml document and uses bash tools or Python or whatever is installed to convert, split, and filter the incoming yaml.
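For example (a minimal sketch; kubectl must be installed and configured locally, and the sha1 trigger is just one way to re-run the provisioner when the upstream file changes):

resource "null_resource" "install_crd" {
  # re-run the provisioner whenever the fetched manifest changes
  triggers = {
    manifest_sha1 = sha1(data.http.crd.body)
  }

  provisioner "local-exec" {
    command = "kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/application/master/deploy/kube-app-manager-aio.yaml"
  }
}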
I got this to work using the kubectl provider. Eventually kubernetes_manifest should work as well, but it is currently (v2.5.0) still in beta and has some bugs. This example only uses kind + name for the keys, but for full uniqueness they should also include the API version and the namespace.
resource "kubectl_manifest" "cdr" {
# Create a map { "kind--name" => yaml_doc } from the multi-document yaml text.
# Each element is a separate kubernetes resource.
# Must use \n---\n to avoid splitting on strings and comments containing "---".
# YAML allows "---" to be the first and last line of a file, so make sure
# raw yaml begins and ends with a newline.
# The "---" can be followed by spaces, so need to remove those too.
# Skip blocks that are empty or comments-only in case yaml began with a comment before "---".
for_each = {
for pair in [
for yaml in split(
"\n---\n",
"\n${replace(data.http.crd.body, "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
) :
[yamldecode(yaml), yaml]
if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
] : "${pair.0["kind"]}--${pair.0["metadata"]["name"]}" => pair.1
}
yaml_body = each.value
}
Once Hashicorp fixes kubernetes_manifest, I would recommend using the same approach. Do not use count + element(), because if the ordering of the elements changes, Terraform will delete and recreate many resources needlessly.
resource "kubernetes_manifest" "crd" {
for_each = {
for value in [
for yaml in split(
"\n---\n",
"\n${replace(data.http.crd.body, "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
) :
yamldecode(yaml)
if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
] : "${value["kind"]}--${value["metadata"]["name"]}" => value
}
manifest = each.value
}
P.S. Please support the Terraform feature request for multi-document yamldecode. It would make things far easier than the above regex.
The kubectl provider can split a multi-resource yaml document (--- separators) for you (docs):
# fetch a raw multi-resource yaml
data "http" "knative_serving_crds" {
  url = "https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-crds.yaml"
}

# split the raw yaml into individual resources
data "kubectl_file_documents" "knative_serving_crds" {
  content = data.http.knative_serving_crds.body
}

# apply each resource from the yaml one by one
resource "kubectl_manifest" "knative_serving_crds" {
  # this depends_on references a resource from the author's own setup; adapt or drop it
  depends_on = [kops_cluster_updater.updater]
  for_each   = data.kubectl_file_documents.knative_serving_crds.manifests
  yaml_body  = each.value
}

There is no variable named "REGION"

I am replacing an EC2 launch template user_data definition that used the (deprecated) template provider with the templatefile function.
The definition itself renders without any problems, but during planning an error occurs saying that the shell variables used for fetching instance metadata and the like are undefined. Is there any way to avoid this?
$ terraform version
Terraform v0.13.5
data "template_file" "userdata" {
template = templatefile("${path.module}/userdata.sh.tpl",
vars....
)
}
resource "aws_launch_template" "sample" {
....
user_data = base64encode(data.template_file.userdata.rendered)
....
}
The relevant part of the template is:
export LOCAL_IP=$(ec2metadata --local-ipv4)
export GLOBAL_IP=$(ec2metadata --public-ipv4)
export REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
export AWS_DEFAULT_REGION="$${REGION}"
$ terraform plan
Error: failed to render : <template_file>:13,30-36: Unknown variable; There is no variable named "REGION"., and 4 other diagnostic(s)
thanks
It doesn't make sense to use the template_file data source and the templatefile function together, because these both do the same thing: render a template.
In your current configuration, you have the templatefile function rendering the template to produce a string, and then the template_file data source interpreting that result as if it were a template, so your template is being evaluated twice.
The first evaluation will turn $${REGION} into ${REGION} as expected, and then the second evaluation will try to handle ${REGION} as a template interpolation.
Instead, remove the template_file data source altogether (it's deprecated) and just use the templatefile function:
user_data = base64encode(templatefile("${path.module}/userdata.sh.tpl", {
  # ...
}))
(Note that the base64encode call stays: unlike aws_instance, the user_data argument of aws_launch_template expects a Base64-encoded string. It now wraps the templatefile result directly, so the template is rendered only once.)
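Putting it together (a sketch; app_env is a hypothetical template variable standing in for whatever your template actually consumes):

resource "aws_launch_template" "sample" {
  # ...
  user_data = base64encode(templatefile("${path.module}/userdata.sh.tpl", {
    app_env = var.env # hypothetical Terraform-supplied value
  }))
}

With a single rendering pass, $${REGION} in the template correctly becomes the literal shell text ${REGION} at instance boot.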

Terraform empty and non-empty block map variables

I want to create a backend service using Terraform, with the google_compute_backend_service resource type.
I currently have two backend services created with the gcloud command. One uses a cdn_policy block and the other doesn't.
The first backend service's tfstate looks like:
...
"cdn_policy": [
  {
    "cache_key_policy": [],
    "signed_url_cache_max_age_sec": 3600
  }
]
...
And the second backend service's looks like:
"cdn_policy": []
How do I write a Terraform configuration that works for both of them, i.e. one that can manage backend services that have a cdn_policy block as well as backend services without one?
One idea is to write two Terraform configurations: one with cdn_policy and one without. But I don't think that's best practice.
If I put cdn_policy = [], it results in the error An argument named "cdn_policy" is not expected here.
You can use dynamic blocks to create a set of blocks based on a list of objects in an input variable: Dynamic Blocks
resource "google_compute_backend_service" "service" {
...
dynamic "cdn_policy" {
for_each = var.cdn_policy
content {
cache_key_policy = cdn_policy.value.cache_key_policy
signed_url_cache_max_age_sec = cdn_policy.value.signed_url_cache_max_age_sec
}
}
}
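The corresponding input variable can then default to an empty list, so omitting cdn_policy generates no block at all (a sketch; the object type mirrors the two attributes used above):

variable "cdn_policy" {
  type = list(object({
    cache_key_policy             = any
    signed_url_cache_max_age_sec = number
  }))
  default = [] # no cdn_policy block is generated
}

Callers that do want CDN behavior pass a one-element list; for_each then emits exactly one cdn_policy block.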

File not found when trying to use etag

I am trying to use etag when I update my S3 bucket, but I get this error:
Error: Error in function call
on config.tf line 48, in resource "aws_s3_bucket_object" "bucket_app":
48: etag = filemd5("${path.module}/${var.env}/app-config.json")
|----------------
| path.module is "."
| var.env is "develop"
Call to function "filemd5" failed: no file exists at develop/app-config.json.
However, this works fine:
resource "aws_s3_bucket_object" "bucket_app" {
bucket = "${var.app}-${var.env}-app-assets"
key = "config.json"
source = "${path.module}/${var.env}/app-config.json"
// etag = filemd5("${path.module}/${var.env}/app-config.json")
depends_on = [
local_file.app_config_json
]
}
I am generating the file this way:
resource "local_file" "app_config_json" {
content = local.app_config_json
filename = "${path.module}/${var.env}/app-config.json"
}
I really don't get what I am doing wrong...
If you happen to arrive here and are using an archive_file data source, there is an exported attribute called output_md5. It provides the same result you would get from filemd5(data.archive_file.config.output_path).
Here is a full example:
data "archive_file" "config" {
  type        = "zip"
  output_path = "${path.module}/config.zip"

  source {
    filename = "config/template-configuration.json"
    content  = "some content"
  }
}

resource "aws_s3_bucket_object" "config" {
  bucket       = aws_s3_bucket.stacks.bucket
  key          = "config.zip"
  content_type = "application/zip"
  source       = data.archive_file.config.output_path
  etag         = data.archive_file.config.output_md5
}
All functions in Terraform run during the initial configuration processing, not during the graph walk. For all of the functions that read files on disk, that means that the files must be present on disk prior to running Terraform as part of the configuration itself -- usually, checked in to version control -- rather than being generated dynamically during the Terraform operation.
The documentation for file, which filemd5 builds on, has the following to say about it:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
As the documentation there suggests, the local_file data source provides a way to read a file into memory as a resource during the graph walk, although its result would still need to be passed to md5 to get the result you needed here.
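That documented escape hatch would look roughly like this (a sketch; the data source reads the generated file during the graph walk, after the resource that writes it):

data "local_file" "app_config_json" {
  filename = local_file.app_config_json.filename
}

# then, on the bucket object:
#   etag = md5(data.local_file.app_config_json.content)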
Because you're creating the file with a local_file resource anyway, you can skip the need for the additional data resource and derive the MD5 hash directly from your local_file.app_config_json resource:
resource "aws_s3_bucket_object" "bucket_app" {
bucket = "${var.app}-${var.env}-app-assets"
key = "config.json"
source = local_file.app_config_json.filename
etag = md5(local_file.app_config_json.content)
}
Note that we don't need depends_on here: because the configuration is derived from attributes of the local_file.app_config_json resource, Terraform can already see that the dependency relationship exists.
