Terraform: ignore changes to a certain environment variable

I have an AWS Lambda function I created using Terraform. Code changes are auto-deployed from our CI server, and the commit SHA is passed as an environment variable (GIT_COMMIT_HASH), so the function changes outside of Terraform's scope (because people were asking...).
This works well so far. But now I wanted to update the function's Node.js version, and Terraform tries to reset the env var to its initial value of "unknown".
I tried to use the ignore_changes block but couldn't get Terraform to ignore the changes made elsewhere...
resource "aws_lambda_function" "test" {
filename = data.archive_file.helloworld.output_path
function_name = "TestName_${var.environment}"
role = aws_iam_role.test.arn
handler = "src/index.handler"
runtime = "nodejs14.x"
timeout = 1
memory_size = 128
environment {
variables = {
GIT_COMMIT_HASH = "unknown"
}
}
lifecycle {
ignore_changes = [
environment.0.variables["GIT_COMMIT_HASH"],
]
}
}
Is this possible? How do I need to reference the variable?
Edit:
Plan output looks like this:
  # aws_lambda_function.test will be updated in-place
  ~ resource "aws_lambda_function" "test" {
        # ... removed some lines
        source_code_size = 48012865
        tags             = {}
        timeout          = 1
        version          = "12"

      ~ environment {
          ~ variables = {
              ~ "GIT_COMMIT_HASH" = "b7a77d0" -> "unknown"
            }
        }

        tracing_config {
            mode = "PassThrough"
        }
    }

I tried to replicate the issue, and in my tests it works exactly as expected. I can only suspect that you are using an old version of Terraform in which this issue occurs; numerous GitHub issues have been reported regarding the limitations of ignore_changes.
I performed tests using Terraform v0.15.3 with the aws provider v3.31.0, and I can confirm that ignore_changes works as it should there. Since this is a Terraform-internal problem, the only way to rectify it, to the best of my knowledge, is to upgrade your Terraform.
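For reference, a minimal sketch of the lifecycle block that behaved as expected in my tests; the index syntax below is equivalent to the legacy environment.0.variables form used in the question:

resource "aws_lambda_function" "test" {
  # ... arguments as in the question ...

  lifecycle {
    ignore_changes = [
      # Ignores out-of-band updates to this single map key; other
      # environment variables remain managed by Terraform.
      environment[0].variables["GIT_COMMIT_HASH"],
    ]
  }
}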

Related

How to re-use a terraform module in multiple configurations?

I have a terraform plan that defines most of my BQ environment.
I'm working on a cross-region deployment which will replicate some of my tables to multiple regions.
Rather than copy-pasting the same module in every place that I need it, I'd like to define the module in one place and just call it from every configuration that needs it.
For example, I have the following file structure:
./cross_region_tables
  -> tables.tf
./foo
  -> tables.tf
./bar
  -> tables.tf
I'd like to define some_module in ./cross_region_tables/tables.tf like so:
output "some_module" {
  x      = something
  region = var.region
}
Then I'd like to just call some_module from ./foo/tables.tf.
The problem is that I don't know how to call this specific module, since ./cross_region_tables/tables.tf will contain several table definitions (as output objects). I know how to import a child module, but I don't know how to call a specific output within that child module
I've solved the issue by adding a module object to the child module with a variable for the region, then calling the child from each regional configuration and passing the region as a variable.
in child folder main.tf:
variable "region" {}

module "foo" {
  source = "../path/to/foo" # placeholder: wherever the shared module lives
  x      = "something"
  y      = "something_else"
  region = var.region
}
in regional folder for regionX:
variable "region" {
  default = "regionX"
}

module "child" {
  source = "../path/to/child"
  region = var.region
}
in regional folder for regionY:
variable "region" {
  default = "regionY"
}

module "child" {
  source = "../path/to/child"
  region = var.region
}
repeat for as many regions as necessary.
You can pass a provider to each of your modules, with each provider configured for a different region...
That is well documented here:
https://www.terraform.io/language/modules/develop/providers#passing-providers-explicitly
# The default "aws" configuration is used for AWS resources in the root
# module where no explicit provider instance is selected.
provider "aws" {
region = "us-west-1"
}
# An alternate configuration is also defined for a different
# region, using the alias "usw2".
provider "aws" {
alias = "usw2"
region = "us-west-2"
}
# An example child module is instantiated with the alternate configuration,
# so any AWS resources it defines will use the us-west-2 region.
module "example" {
source = "./example"
providers = {
aws = aws.usw2
}
}
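On the child module's side, per the same docs page, the module only declares that it requires the hashicorp/aws provider and defines no provider block of its own; the caller supplies the configured instance:

# inside ./example (the child module)
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}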
The other part is what you mentioned:
The problem is that I don't know how to call this specific module, since ./cross_region_tables/tables.tf will contain several table definitions
Resources within that module (cross_region_tables) can be turned on/off with variables; a sketch follows.
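For example, a minimal count-based toggle (the variable, dataset, and table names here are hypothetical, not from your configuration):

variable "enable_foo_table" {
  type    = bool
  default = true
}

resource "google_bigquery_table" "foo" {
  # Created only when the toggle is on; count = 0 removes it.
  count      = var.enable_foo_table ? 1 : 0
  dataset_id = google_bigquery_dataset.main.dataset_id # hypothetical dataset
  table_id   = "foo"
}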

Terragrunt ignores newly added outputs in a project

I've inherited a terraform project set up to deploy to the Elastic Container Service on AWS.
So far, I've greatly enjoyed working with it, and have managed to change a few things despite being very new to terraform.
Our project uses terragrunt to marshal the environments, and I've made changes to the files before to add environment-specific settings, and it's gone great.
However, I've tried to add a whole new shiny module and... terragrunt hates it.
This is the code I've tried to add:
terraform {
  source = "${path_relative_from_include()}/../modules//auto_scaling"
}

dependency "analytics_cluster" {
  config_path = "../analytics_cluster"
}

dependency "analytics_app" {
  config_path = "../analytics_app"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  ecs_cluster_name         = dependency.analytics_cluster.outputs.name
  ecs_app_service_name     = dependency.analytics_app.outputs.app_service_name
  ecs_sidecar_service_name = dependency.analytics_app.outputs.sidecar_service_name
}

dependencies {
  paths = [
    "../analytics_cluster",
    "../analytics_app",
  ]
}
and the error I'm getting is:
terragrunt.hcl:19,62-79: Unsupported attribute; This object does not have an attribute named "app_service_name"., and 1 other diagnostic(s)
This is an output I added to the module that the dependency points at.
This is what the outputs look like:
# output "service_name" {
# value = module.analytics_app.service_name
# }
output "app_service_name" {
value = module.analytics_app.app_service
}
output "sidecar_service_name" {
value = module.analytics_app.sidecar_service
}
The weirdest thing is that if I change the .hcl file to use the commented-out output, like so:
inputs = {
  ecs_cluster_name     = dependency.analytics_cluster.outputs.name
  ecs_app_service_name = dependency.analytics_app.outputs.service_name
}
then this is a valid input, despite the fact that service_name is now commented out.
Why are the new output variables not being picked up? And why is the old variable which I've removed still present?
My guess is that you are only picking up the outputs of previously applied modules. When a module is applied, its contents (along with an outputs definition) are generated in the .terragrunt-cache directory.
When you add a new output but do not apply it, that file won't be regenerated, hence the missing attribute.
Try running terragrunt apply on the dependency before running terragrunt init on the target module.
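For example, assuming the directory layout from the question (the target directory name here is a guess based on your source path):

cd ../analytics_app
terragrunt apply   # applies the dependency so its outputs actually exist
cd ../auto_scaling
terragrunt init    # the dependency outputs should now resolve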
I had this problem when I tried to run-all apply on a new repository with dependencies. Terragrunt said no outputs existed for the dependencies, but this too was because the dependency modules had not been applied yet. After I applied them manually, it worked.
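As an aside, if you need plan to work on a fresh checkout before the dependencies have ever been applied, terragrunt dependency blocks also support mock placeholder outputs; a sketch (the placeholder values are made up):

dependency "analytics_app" {
  config_path = "../analytics_app"

  # Used only while the dependency has no real outputs yet.
  mock_outputs = {
    app_service_name     = "mock-app-service"
    sidecar_service_name = "mock-sidecar-service"
  }
}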

Terraform extensible user_data in aws_instance

Good day,
Our team utilizes a module that creates Linux instances with a standard configuration in user_data as defined below.
resource "aws_instance" "this" {
...
user_data = templatefile("${path.module}/user_data.tp", { hostname = upper("${local.prefix}${count.index + 1}"), domain = local.domain })
...
}
Contents of the user_data.tp:
#cloud-config
repo_update: true
repo_upgrade: all
preserve_hostname: false
hostname: ${hostname}
fqdn: ${hostname}.${domain}
manage_etc_hosts: false
runcmd:
- 'echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg.d/99_hostname.cfg'
What is the best way to modify this module such that the contents of user_data.tp are always executed and optionally another block could be passed to install certain packages or execute certain shell scripts?
I'm assuming it involves using cloudinit_config and a multipart mime configuration, but would appreciate any suggestions.
Thank you.
Since you showed a cloud-config template I'm assuming here that you're preparing a user_data for an AMI that runs cloud-init on boot. That means this is perhaps more of a cloud-init question than a Terraform question, but I understand that you also want to know how to translate the cloud-init-specific answer into a workable Terraform configuration.
The User-data Formats documentation describes various possible ways to format user_data for cloud-init to consume. You mentioned multipart MIME in your question and indeed that could be a viable answer here if you want cloud-init to interpret the two payloads separately, rather than as a single artifact. The cloud-init docs talk about the tool make-mime, but the Terraform equivalent of that is the cloudinit_config data source belonging to the hashicorp/cloudinit provider:
variable "extra_cloudinit" {
type = object({
content_type = string
content = string
})
# This makes the variable optional to set,
# and var.extra_cloudinit will be null if not set.
default = null
}
data "cloudinit_config" "user_data" {
# set "count" to be whatever your aws_instance count is set to
count = ...
part {
content_type = "text/cloud-config"
content = templatefile(
"${path.module}/user_data.tp",
{
hostname = upper("${local.prefix}${count.index + 1}")
domain = local.domain
}
)
}
dynamic "part" {
# If var.extra_cloud_init is null then this
# will produce a zero-element list, or otherwise
# it'll produce a one-element list.
for_each = var.extra_cloudinit[*]
content {
content_type = part.value.content_type
content = part.value.content
# NOTE: should probably also set merge_type
# here to tell cloud-init how to merge these
# two:
# https://cloudinit.readthedocs.io/en/latest/topics/merging.html
}
}
}
resource "aws_instance" "example" {
count = length(data.cloudinit_config.user_data)
# ...
user_data = data.cloudinit_config.user_data[count.index].rendered
}
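A caller could then opt in to an extra part like this (the module path and script content here are hypothetical):

module "instance" {
  source = "./modules/instance" # hypothetical path to the module above

  extra_cloudinit = {
    content_type = "text/x-shellscript"
    content      = <<-EOT
      #!/bin/bash
      yum install -y htop
    EOT
  }
}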
If you expect that the extra cloud-init configuration will always come in the form of extra cloud-config YAML values then an alternative approach would be to merge the two data structures together within Terraform and then yamlencode the merged result:
variable "extra_cloudinit" {
type = any
# This makes the variable optional to set,
# and var.extra_cloudinit will be null if not set.
default = {}
validation {
condition = can(merge(var.extra_cloudinit, {}))
error_message = "Must be an object to merge with the built-in cloud-init settings."
}
}
locals {
cloudinit_config = merge(
var.extra_cloudinit,
{
repo_update = true
repo_upgrade = "all"
# etc, etc
},
)
}
resource "aws_instance" "example" {
count = length(data.cloudinit_config.user_data)
# ...
user_data = <<EOT
#!cloud-config
${yamlencode(local.cloudinit_config)}
EOT
}
A disadvantage of this approach is that Terraform's merge function is always a shallow merge only, whereas cloud-init itself has various other merging options. However, an advantage is that the resulting single YAML document will generally be simpler than a multipart MIME payload and thus probably easier to review for correctness in the terraform plan output.
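With this variant, callers just pass extra top-level cloud-config keys (the module path and keys below are hypothetical):

module "instance" {
  source = "./modules/instance" # hypothetical

  extra_cloudinit = {
    packages = ["htop", "jq"]
    runcmd   = ["systemctl restart myapp"]
  }
}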

terraform variable default value interpolation from locals

I have a use case where I need two AWS providers for different resources. The default aws provider is configured in the main module which uses another module that defines the additional aws provider.
By default, I'd like both providers to use the same AWS credentials unless explicitly overridden.
I figured I could do something like this. In the main module:
locals {
  foo_cloud_access_key = aws.access_key
  foo_cloud_secret_key = aws.secret_key
}

variable "foo_cloud_access_key" {
  type    = string
  default = local.foo_cloud_access_key
}

variable "foo_cloud_secret_key" {
  type    = string
  default = local.foo_cloud_secret_key
}
where variables foo_cloud_secret_key and foo_cloud_access_key would then be passed down to the child module like this:
module "foobar" {
  ...
  foobar_access_key = var.foo_cloud_access_key
  foobar_secret_key = var.foo_cloud_secret_key
  ...
}
Module foobar would then configure its additional aws provider with these variables:
provider "aws" {
alias = "foobar_aws"
access_key = var.foobar_access_key
secret_key = var.foobar_secret_key
}
When I run terraform init, it spits out this error (for both variables):
Error: Variables not allowed

  on variables.tf line 66, in variable "foo_cloud_access_key":
  66:   default = local.foo_cloud_access_key

Variables may not be used here.
Is it possible to achieve something like this in terraform or is there any other way to go about this?
Having complex, computed default values of variables is possible, but only with a workaround:
- define a dummy default value for the variable, e.g. null
- define a local value that resolves to either the variable's value or the actual default value

variable "something" {
  default = null
}

locals {
  some_computation = ... # based on whatever data you want
  something        = var.something == null ? local.some_computation : var.something
}
Then use only local.something instead of var.something in the rest of the Terraform files.
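Applied to your case, a sketch: provider attributes like aws.access_key cannot be referenced at all, so the fallback credential has to come from somewhere you control (a variable, a data source, etc.); local.default_access_key below is a hypothetical stand-in for that source:

variable "foo_cloud_access_key" {
  type    = string
  default = null
}

locals {
  # Fall back to the (hypothetical) default credential when not set.
  foo_cloud_access_key = var.foo_cloud_access_key == null ? local.default_access_key : var.foo_cloud_access_key
}

module "foobar" {
  source            = "./path/to/foobar" # hypothetical
  foobar_access_key = local.foo_cloud_access_key
}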

Terraform updating a one of many ECS service/task

Happy Friday! Hoping someone can help me with this issue or point out the flaws in my thinking.
$ terraform --version
Terraform v0.12.7
+ provider.aws v2.25.0
+ provider.template v2.1.2
Preface
This is my first time using Terraform. We have an existing AWS ECS/Fargate environment up and running in a 'test' environment. We recently (i.e. after setting up the test env) started to use Terraform for IaC purposes.
Current Config
The environment has a single ECS cluster; we're using FARGATE, but I'm not sure that matters for this question. The cluster has several services, and each service has a single task (Docker image) associated with it, so they can be individually scaled. Each Docker image has its own repo.
What I'm trying to do
So with Terraform I was hoping to be able to create, update, and destroy the environment. Creating/destroying seems fairly straightforward; however, I'm hitting a roadblock for updating.
As I said, each task has its own repo. When a pull request is made against the repo, our CI platform (CircleCI, if that matters) builds the new Docker image, tags it, and pushes it. Then we use an API call to trigger a build of the Terraform repo, passing the name of the service/task that was updated.
Problem
The problem we're facing is that, when going through the services (described below), I can't figure out how to get Terraform to either ignore the services that are not being updated or provide the correct container_definitions in the aws_ecs_task_definition, specifically the current image tag (we don't use the latest tag). So I'm trying to figure out how to get the latest container information (tag) from Terraform, or just tell Terraform to skip the unmodified tasks.
Terraform Script
Here is a stripped-down version of what I have tried. This is in a module called ecs.tf, and var.ecs_svc_names is a list of the service names. I have removed some elements, as I don't think they pertain to this issue and having them makes this very large.
CAVEATS
I have not run the Terraform 'script' as shown below due to the issues I am asking about, so my syntax may be a bit off. Sorry if that is the case; hopefully this will show you what I'm trying to do...
ecs.tf
/* ecs_svc_names is passed in, but here is its definition:
variable "ecs_svc_names" {
  type        = list(string)
  description = "This is a list/array of the images/services that are contained in the cluster"
  default = [
    "main",
    "sub-service1",
    "sub-service2"]
}
*/

locals {
  numberOfServices = length(var.ecs_svc_names)
}

resource "aws_ecs_cluster" "ecs_cluster" {
  name = "${var.env_type}-ecs-cluster"
}

// Create the service objects (1 for each service)
resource "aws_ecs_service" "ecs-service" {
  // How many services are being created
  count           = local.numberOfServices
  name            = var.ecs_svc_names[count.index]
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = "${aws_ecs_task_definition.ecs-task-definition[count.index].family}:${max(aws_ecs_task_definition.ecs-task-definition[count.index].revision, data.aws_ecs_task_definition.ecs-task-def[count.index].revision)}"
  desired_count   = 1
  launch_type     = "FARGATE"
  // stuff removed
}

resource "aws_ecs_task_definition" "ecs-task-definition" {
  // How many tasks. There is a 1-1 relationship between tasks and services
  count                    = local.numberOfServices
  family                   = var.ecs_svc_names[count.index]
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  // cpu/memory stuff removed
  task_role_arn            = var.ecs_task_role_arn
  container_definitions    = data.template_file.ecs_containers_json[count.index].rendered
}
data.tf
locals {
  numberOfServices = length(var.ecs_svc_names)
}

data "aws_ecs_task_definition" "ecs-task-def" {
  // How many services are being created, 1-1 relationship between tasks and services
  count           = local.numberOfServices
  task_definition = aws_ecs_task_definition.ecs-task-definition[count.index].family

  depends_on = [
    "aws_ecs_task_definition.ecs-task-definition",
  ]
}

data "template_file" "ecs_containers_json" {
  // How many tasks. There is a 1-1 relationship between tasks and services
  count    = local.numberOfServices
  template = file("${path.module}/container.json.template")

  vars = {
    // vars removed
    image = aws_ecs_task_definition.ecs-task-definition[count.index].family
    // This is where I hit the road-block, how do I get the current docker tag from Terraform?
    tag = var.ecs_svc_name_2_update == var.ecs_svc_names[count.index] ? var.ecs_svc_image_tag : data.aws_ecs_task_definition.ecs-task-def[count.index].
  }
}
I didn't post the JSON document, if you need that I can provide it...
Thank you
It is necessary to pass the updated image attribute in the container definition of the task definition revision.
You can data-source the container definition of the current task revision, which is used by the service, and pass its image back in to Terraform. You may follow the code below.
data "template_file" "example" {
template = "${file("${path.module}/example.json")}"
vars {
image = "${data.aws_ecs_container_definition.example.image}"
}
}
resource "aws_ecs_task_definition" "example" {
family = "${var.project_name}-${var.environment_name}-example"
container_definitions = "${data.template_file.example.rendered}"
cpu = 192
memory = 512
}
data "aws_ecs_container_definition" "example" {
task_definition = "${var.project_name}-${var.environment_name}-example"
container_name = "example"
}
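If you need just the tag rather than the full image reference, and assuming your images are tagged like repo/name:tag, you could split it out of the data source's image attribute; a sketch:

locals {
  current_image = data.aws_ecs_container_definition.example.image
  # Everything after the colon, i.e. the tag (assumes exactly one colon).
  current_tag   = element(split(":", local.current_image), 1)
}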
