Often I've found myself in a scenario where I want to create a resource with Terraform and set, for example, an environment variable on that resource which is only known at a later stage, once the resource has been created.
Let's say I want to create a google_cloud_run_service and set an environment variable in the container that holds the URL from which the app can be reached:
resource "google_cloud_run_service" "test_app" {
name = "test-app"
location = var.region
template {
spec {
containers {
image = "gcr.io/myimage:latest"
env {
name = "CURRENT_HOST"
value = google_cloud_run_service.test_app.status[0].url
}
}
}
}
}
This, however, is not allowed, as it is a self-reference: the URL is only known once the service has been created. Is there a way to accomplish this?
Related
I have a terraform plan that defines most of my BQ environment.
I'm working on a cross-region deployment which will replicate some of my tables to multiple regions.
Rather than copy-pasting the same module everywhere I need it, I'd like to define the module in one place and just call it from every configuration that needs it.
For example, I have the following file structure:
./cross_region_tables
-> tables.tf
./foo
-> tables.tf
./bar
-> tables.tf
I'd like to define some_module in ./cross_region_tables/tables.tf like so:
output "some_module" {
  x      = something
  region = var.region
}
Then I'd like to just call some_module from ./foo/tables.tf.
The problem is that I don't know how to call this specific module, since ./cross_region_tables/tables.tf will contain several table definitions (as output objects). I know how to import a child module, but I don't know how to call a specific output within that child module.
I've solved the issue by adding a module object to the child module with a variable for the region, then calling the child from each regional configuration and passing the region as a variable.
in child folder main.tf:
variable "region" {}

module "foo" {
  source = "./foo" # a source is required; this path is assumed for illustration
  x      = "something"
  y      = "something_else"
  region = var.region
}
in regional folder for regionX:
variable "region" {
  default = "regionX"
}

module "child" {
  source = "../path/to/child"
  region = var.region
}
in regional folder for regionY:
variable "region" {
  default = "regionY"
}

module "child" {
  source = "../path/to/child"
  region = var.region
}
repeat for as many regions as necessary.
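As for referencing a specific value from the child afterwards, module outputs chain in the usual way. A hypothetical sketch (it assumes module "foo" defines an output named table_id, which my answer above doesn't show):
# in child folder main.tf: re-export a value from the inner module
output "table_id" {
  value = module.foo.table_id
}

# in a regional folder, after the module "child" block above,
# reference it as module.child.table_id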
You can pass a provider to each of your modules, with each provider configured for a different region.
That is well documented here:
https://www.terraform.io/language/modules/develop/providers#passing-providers-explicitly
# The default "aws" configuration is used for AWS resources in the root
# module where no explicit provider instance is selected.
provider "aws" {
region = "us-west-1"
}
# An alternate configuration is also defined for a different
# region, using the alias "usw2".
provider "aws" {
alias = "usw2"
region = "us-west-2"
}
# An example child module is instantiated with the alternate configuration,
# so any AWS resources it defines will use the us-west-2 region.
module "example" {
source = "./example"
providers = {
aws = aws.usw2
}
}
The other part is what you mentioned:
The problem is that I don't know how to call this specific module, since ./cross_region_tables/tables.tf will contain several table definitions
Resources within that module (cross_region_tables) can be turned on/off with variables, as sketched below.
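A minimal sketch of that on/off pattern (the variable, dataset, and table names here are hypothetical): gate each table resource with a boolean and count, so each regional configuration enables only what it needs.
variable "create_foo_table" {
  type    = bool
  default = false
}

resource "google_bigquery_table" "foo" {
  # Created only when the caller opts in.
  count      = var.create_foo_table ? 1 : 0
  dataset_id = var.dataset_id
  table_id   = "foo"
}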
New to terraform, and have been building out the infrastructure recently.
I am trying to pull secrets from Azure Key Vault and assign the keys to the variables.tf file depending on the environment (dev.tfvars, test.tfvars, etc.). However, when I execute the plan with the tfvars file as the parameter, I get an error with the following message:
Error: Variables not allowed
Here are the files and the relevant contents of it.
variables.tf:
variable "user_name" {
type = string
sensitive = true
}
data.tf (referencing the Azure Key Vault):
data "azurerm_key_vault" "test" {
  name                = var.key_vault_name
  resource_group_name = var.resource_group
}

data "azurerm_key_vault_secret" "test" {
  name         = "my-key-vault-key-name"
  key_vault_id = data.azurerm_key_vault.test.id
}
test.tfvars:
user_name = "${data.azurerm_key_vault_secret.test.value}" # Where the error occurrs
Can anyone point out what I'm doing wrong here? And if so is there another way to achieve such a thing?
In Terraform, a variable can only be used for user input. You cannot assign to it anything dynamically computed by your code. Variables are like read-only arguments; for more info see Input Variables in the docs.
If you want to assign a value to something for later use, you must use locals. For example:
locals {
  user_name = data.azurerm_key_vault_secret.test.value
}
Unlike variables, local values can be assigned dynamically computed expressions. For more info, see Local Values.
You can't create dynamic variables. All variables must have known values before execution of your code. The only thing you can do is use a local instead of a variable:
locals {
  user_name = data.azurerm_key_vault_secret.test.value
}
and then refer to it as local.user_name.
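For instance, a minimal sketch (an illustration under the same setup, not from the original answer) that surfaces the value as an output:
output "user_name" {
  value     = local.user_name
  # Must be marked sensitive because the underlying Key Vault secret is.
  sensitive = true
}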
Happy Friday! Hoping someone can help me with this issue or point out the flaws in my thinking.
$ terraform --version
Terraform v0.12.7
+ provider.aws v2.25.0
+ provider.template v2.1.2
Preface
This is my first time using Terraform. We have an existing AWS ECS/Fargate environment up and running in a 'test' environment. We recently (i.e. after setting up the test env) started to use Terraform for IaC purposes.
Current Config
The environment has a single ECS cluster; we're using FARGATE, but I'm not sure that matters for this question. The cluster has several services, and each service has a single task (Docker image) associated with it, so they can be individually scaled. Each Docker image has its own repo.
What I'm trying to do
So with Terraform I was hoping to be able to create, update, and destroy the environment. Creating/destroying seems fairly straightforward; however, I'm hitting a roadblock for updating.
As I said, each task has its own repo. When a pull request is made against the repo, our CI platform (CircleCI, if that matters) builds the new Docker image, tags it, and pushes it. Then we use an API call to trigger a build of the Terraform repo, passing the name of the service/task that was updated.
Problem
The problem we're facing is that, when iterating over the services (described below), I can't figure out how to get Terraform either to ignore the services that are not being updated or to provide the correct container_definitions in the aws_ecs_task_definition, specifically the current image tag (we don't use the latest tag). So I need to either fetch the latest container information (tag) from Terraform or tell it to skip the unmodified tasks.
Terraform Script
Here is a stripped-down version of what I have tried. This is in a module called ecs.tf, and var.ecs_svc_names is a list of the service names. I have removed some elements that I don't think pertain to this issue, since including them makes this very large.
CAVEATS
I have not run the Terraform 'script' as shown below due to the issues I am asking about, so my syntax may be a bit off. Sorry if that is the case; hopefully this will show you what I'm trying to do.
ecs.tf
/* ecs_service_names is passed in, but here is its definition:
variable "ecs_service_names" {
  type        = list(string)
  description = "This is a list/array of the images/services that are contained in the cluster"
  default = [
    "main",
    "sub-service1",
    "sub-service2"]
}
*/
locals {
  numberOfServices = length(var.ecs_svc_names)
}
resource "aws_ecs_cluster" "ecs_cluster" {
name = "${var.env_type}-ecs-cluster"
}
// Create the service objects (1 for each service)
resource "aws_ecs_service" "ecs-service" {
// How many services are being created
count = local.numberOfServices
name = var.ecs_svc_names[count.index]
cluster = aws_ecs_cluster.ecs_cluster.id
definition[count.index].family}:${max(aws_ecs_task_definition.ecs-task-definition[count.index].revision, data.aws_ecs_task_definition.ecs-task-def.revision)}"
desired_count = 1
launch_type = "FARGATE"
// stuff removed
}
resource "aws_ecs_task_definition" "ecs-task-definition" {
// How many tasks. There is a 1-1 relationship between tasks and services
count = local.numberOfServices
family = var.ecs_svc_names[count.index]
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
// cpu/memory stuff removed
task_role_arn = var.ecs_task_role_arn
container_definitions = data.template_file.ecs_containers_json[count.index].rendered
}
data.tf
locals {
  numberOfServices = length(var.ecs_svc_names)
}

data "aws_ecs_task_definition" "ecs-task-def" {
  // How many services are being created, 1-1 relationship between tasks and services
  count           = local.numberOfServices
  task_definition = aws_ecs_task_definition.ecs-task-definition[count.index].family
  depends_on = [
    aws_ecs_task_definition.ecs-task-definition,
  ]
}

data "template_file" "ecs_containers_json" {
  // How many tasks. There is a 1-1 relationship between tasks and services
  count    = local.numberOfServices
  template = file("${path.module}/container.json.template")
  vars = {
    // vars removed
    image = aws_ecs_task_definition.ecs-task-definition[count.index].family
    // This is where I hit the road-block, how do I get the current docker tag from Terraform?
    tag = (var.ecs_svc_name_2_update == var.ecs_svc_names[count.index]
      ? var.ecs_svc_image_tag
      : data.aws_ecs_task_definition.ecs-task-def[count.index].)
  }
}
I didn't post the JSON document; if you need it, I can provide it.
Thank you
It is necessary to pass the updated image attribute in the container definition of the task definition revision.
You can data-source the container definition of the current task revision, which is used by the service, and pass it into the new task definition. You may follow the code below.
data "template_file" "example" {
template = "${file("${path.module}/example.json")}"
vars {
image = "${data.aws_ecs_container_definition.example.image}"
}
}
resource "aws_ecs_task_definition" "example" {
family = "${var.project_name}-${var.environment_name}-example"
container_definitions = "${data.template_file.example.rendered}"
cpu = 192
memory = 512
}
data "aws_ecs_container_definition" "example" {
task_definition = "${var.project_name}-${var.environment_name}-example"
container_name = "example"
}
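To close the loop on skipping unmodified services, one pattern is to point the service at whichever task definition revision is newest. A sketch under assumptions: the aws_ecs_cluster resource and the extra aws_ecs_task_definition data source below are hypothetical, not part of the answer above.
data "aws_ecs_task_definition" "example" {
  # Looks up the latest active revision of the family, including revisions
  # created outside Terraform (e.g. by CI).
  task_definition = "${aws_ecs_task_definition.example.family}"
}

resource "aws_ecs_service" "example" {
  name          = "example"
  cluster       = "${aws_ecs_cluster.example.id}" # assumed cluster resource
  desired_count = 1
  launch_type   = "FARGATE"
  # Pin to whichever revision is newer: the one Terraform manages or the
  # one currently live.
  task_definition = "${aws_ecs_task_definition.example.family}:${max(aws_ecs_task_definition.example.revision, data.aws_ecs_task_definition.example.revision)}"
}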
I am using terraform 0.12.8 and I am trying to write a resource which would iterate over the following variable structure:
variable "applications" {
type = map(string)
default = {
"app1" = "test,dev,prod"
"app2" = "dev,prod"
}
}
My resource:
resource "aws_iam_user" "custom" {
for_each = var.applications
name = "circleci-${var.tags["ServiceType"]}-user-${var.tags["Environment"]}-${each.key}"
path = "/"
}
So, I can iterate over my map. However, I can't figure out how to verify that var.tags["Environment"] is enabled for a specific app, e.g. app1.
Basically, I want to ensure that the resource is created for each application only if the Environment value appears in that application's list in the applications map.
Could someone help me out here?
Please note that I am happy to go with a different variable structure if you have something to propose that would accomplish my goal.
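One way to express that check, keeping the variable structure above (a sketch, not an answer from the thread): filter the map with a for expression, splitting each comma-separated string and testing membership with contains():
resource "aws_iam_user" "custom" {
  # Keep only the apps whose environment list includes the current Environment.
  for_each = {
    for app, envs in var.applications :
    app => envs
    if contains(split(",", envs), var.tags["Environment"])
  }

  name = "circleci-${var.tags["ServiceType"]}-user-${var.tags["Environment"]}-${each.key}"
  path = "/"
}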
I want to create VMs in different cloud providers (e.g. GCP, AWS, Azure) from a single Terraform script.
So I wanted to know: will Terraform create the VM instances in parallel across all the public clouds?
Terraform builds a directed acyclic graph (also referred to as a DAG) to understand the dependencies between things. If something isn't dependent on something else, then Terraform will execute it in parallel, up to the number of concurrent operations specified by the -parallelism flag, which defaults to 10.
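For example, to allow more concurrent operations for a single run:
terraform apply -parallelism=20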
If things are completely separate across multiple providers (you're just creating the same stack in n cloud providers) then it will be comfortably parallel across those stacks.
However, I'd recommend against applying multiple environments/cloud providers at the same time like this because of blast-radius issues; in general, err towards minimising how much changes in one operation.
If you have cross provider dependencies then Terraform is great for handling this but it still relies on building that DAG so it can understand your dependencies.
For example, you might want to create an instance in GCP but use AWS's Route53 for all your DNS, resolving the instance's IP address via a DNS record. For that you could use something like this:
resource "google_compute_instance" "test" {
name = "test"
machine_type = "n1-standard-1"
zone = "us-central1-a"
tags = ["foo", "bar"]
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
// Local SSD disk
scratch_disk {
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
metadata = {
foo = "bar"
}
metadata_startup_script = "echo hi > /test.txt"
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
}
data "aws_route53_zone" "example" {
name = "example.com."
}
resource "aws_route53_record" "www" {
zone_id = "${data.aws_route53_zone.example.zone_id}"
name = "www.${data.aws_route53_zone.example.name}"
type = "A"
ttl = "300"
records = ["${google_compute_instance.test.network_interface.0.access_config.0.nat_ip}"]
}
This builds a graph in which aws_route53_record.www depends on both the data.aws_route53_zone.example data source and the google_compute_instance.test resource, so Terraform knows that both of these must complete before it can start work on the Route53 record.