How to use/reference Terraform output values in BigQuery Schema file - terraform

I am using the Terraform BigQuery module to deploy a BQ schema. I am trying to define policy tags, but I am not sure how to reference the newly created taxonomy and policy tag IDs inside my JSON schema. Below is a dummy extract of how my schema.json looks, with policy tags linked to fields.
Problem:
The schema below references the IDs of the taxonomy and policy tag as
${google_data_catalog_taxonomy.my_taxonomy.id}
but when I apply TF it does not replace the values and throws this exception:
Error 400: Invalid value for policyTags: projects/my_project/locations/europe-west2/taxonomies/${google_data_catalog_taxonomy.my_taxonomy.id}/policyTags/${google_data_catalog_policy_tag.PII.id} is not a valid value. Expected value should follow the format "projects/<projectId>/locations/<locationId>/taxonomies/<taxonomyId>/policyTags/<policyTagId>".
Table_1.json looks like the following:
{
  "fields": [
    {
      "mode": "NULLABLE",
      "name": "Email",
      "type": "STRING",
      "policyTags": {
        "names": [
          "projects/my_project/locations/europe-west2/taxonomies/${google_data_catalog_taxonomy.my_taxonomy.id}/policyTags/${google_data_catalog_policy_tag.PII.id}"
        ]
      }
    },
    {
      "mode": "NULLABLE",
      "name": "Mobile",
      "type": "STRING",
      "policyTags": {
        "names": [
          "projects/my_project/locations/europe-west2/taxonomies/${google_data_catalog_taxonomy.my_taxonomy.id}/policyTags/${google_data_catalog_policy_tag.PII.id}"
        ]
      }
    }
  ]
}
I am outputting the taxonomy and policy tag IDs as follows. Can anyone please suggest how these can be referenced in the schema.json file?
outputs.tf
output "my_taxonomy" {
value = google_data_catalog_taxonomy.my_taxonomy.id
}
output "PII" {
value = google_data_catalog_policy_tag.PII.id
}
Edit:
I am using the TF BigQuery module, where my table schema lives in a separate file.
main.tf
module "bigquery" {
source = "terraform-google-modules/bigquery/google"
dataset_id = "my_Dataset"
dataset_name = "my_Dataset"
description = "my_Dataset"
project_id = "my_project_id"
location = "europe-west2"
default_table_expiration_ms = 3600000
tables = [
{
table_id = "table_!",
**schema = "table_1.json",**
time_partitioning = null,
range_partitioning = null,
expiration_time = null,
clustering = null,
labels = {
env = "dev"
}
}
},
]
}

You can render the JSON template using templatefile(path, vars).
Edit your JSON to reference the vars using ${ ... } syntax:
{
  "fields": [
    {
      "mode": "NULLABLE",
      "name": "Email",
      "type": "STRING",
      "policyTags": {
        "names": [
          "projects/my_project/locations/europe-west2/taxonomies/${my_taxonomy}/policyTags/${PII}"
        ]
      }
    },
    {
      "mode": "NULLABLE",
      "name": "Mobile",
      "type": "STRING",
      "policyTags": {
        "names": [
          "projects/my_project/locations/europe-west2/taxonomies/${my_taxonomy}/policyTags/${PII}"
        ]
      }
    }
  ]
}
In your Terraform config, change the schema attribute to use the templatefile function:
module "bigquery" {
source = "terraform-google-modules/bigquery/google"
dataset_id = "my_Dataset"
dataset_name = "my_Dataset"
description = "my_Dataset"
project_id = "my_project_id"
location = "europe-west2"
default_table_expiration_ms = 3600000
tables = [
{
table_id = "table_!",
schema = templatefile(
"${path.module}/table_1.json",
{
my_taxonomy = "${google_data_catalog_taxonomy.my_taxonomy.id}",
PII = "${google_data_catalog_policy_tag.PII.id}"
}),
time_partitioning = null,
range_partitioning = null,
expiration_time = null,
clustering = null,
labels = {
env = "dev"
}
}
},
]
}
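Note that this assumes the taxonomy and policy tag are created in the same configuration. As a rough, untested sketch (attribute values here are placeholders, not from the original question), those Data Catalog resources could look something like:
# Hypothetical example of the Data Catalog resources referenced above.
resource "google_data_catalog_taxonomy" "my_taxonomy" {
  region                 = "europe-west2"
  display_name           = "my_taxonomy"
  activated_policy_types = ["FINE_GRAINED_ACCESS_CONTROL"]
}

resource "google_data_catalog_policy_tag" "PII" {
  taxonomy     = google_data_catalog_taxonomy.my_taxonomy.id
  display_name = "PII"
}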

Related

Terraform - create azure cost management view using Rest API call via terraform code

I want to call a REST API in Terraform. Below is the REST API sample request that needs to be used to create a cost analysis view in Azure. We need to deploy this resource as code using Terraform.
In order to create the view(s) we can use this REST API:
https://learn.microsoft.com/en-us/rest/api/cost-management/views/create-or-update?tabs=HTTP
We can use this API for the email subscription by creating a scheduled action:
https://learn.microsoft.com/en-us/rest/api/cost-management/scheduled-actions/create-or-update?tabs=HTTP
For the second API (the scheduled action for the email subscription), we should use the payload body below as an example:
{
  "kind": "Email",
  "properties": {
    "displayName": "Test",
    "status": "Enabled",
    "viewId": "/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/providers/Microsoft.CostManagement/views/test",
    "schedule": {
      "frequency": "Weekly",
      "startDate": "2023-01-11T02:30:00.000Z",
      "endDate": "2024-01-10T18:30:00.000Z",
      "daysOfWeek": [
        "Wednesday"
      ]
    },
    "notification": {
      "to": [
        "test@microsoft.com"
      ],
      "subject": "Test",
      "message": "Test"
    },
    "fileDestination": {
      "fileFormats": [
        "Csv"
      ]
    },
    "scope": "/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}"
  }
}
I tried to reproduce the same in my environment, using the properties mentioned in your JSON.
Code:
main.tf:
resource "azapi_resource" "symbolicname" {
name = "kavyaexample"
parent_id = data.azurerm_resource_group.example.id
type = "Microsoft.CostManagement/views#2019-11-01"
// location = "eastus"
body = jsonencode({
properties = {
displayName = "myfilefmt"
fileDestination = {
fileFormats = "Csv"
}
notification = {
language = "en"
message = "this is test notif"
regionalFormat = "string"
subject = "Test"
to = [
"xx#xxx.com"
]
}
notificationEmail = "string"
schedule = {
dayOfMonth = 19
daysOfWeek = [
"Thursday"
]
endDate = "2024-01-10T18:30:00.000Z"
frequency = "weekly"
//hourOfDay = int
startDate = "2023-01-19T11:30:00.000Z"
weeksOfMonth = [
"string"
]
}
"scope" : "/providers/Microsoft.Billing/billingAccounts/xxxxx"
"status" = "Enabled"
"viewId" : "/providers/Microsoft.Billing/billingAccounts/xxxxe/providers/Microsoft.CostManagement/views/test",
}
kind = "Email"
})
}
Here, install the AzApi VS Code extension to work with the AzApi provider.
Microsoft.CostManagement/views supports the following format, per the documentation:
resource "azapi_resource" "symbolicname" {
type = "Microsoft.CostManagement/views#2019-11-01"
name = "string"
parent_id = "string"
body = jsonencode({
properties = {
accumulated = "string"
chart = "string"
displayName = "string"
kpis = [
{
enabled = bool
id = "string"
type = "string"
}
]
metric = "string"
pivots = [
{
…..
}
]
query = {
dataSet = {
aggregation = {}
configuration = {
columns = [
"string"
]
}
filter = {
and = [
{
dimensions = {
name = "string"
operator = "string"
values = [
"string"
]
}
or = [
{
tagKey = {
name = "string"
operator = "string"
values = [
"string"
]
}
tags = {
name = "string"
operator = "string"
values = [
"string"
]
}
tagValue = {
name = "string"
operator = "string"
values = [
"string"
]
}
}
granularity = "string"
grouping = [
{
name = "string"
type = "string"
}
]
sorting = [
{
direction = "string"
name = "string"
}
]
}
timeframe = "string"
timePeriod = {
from = "string"
to = "string"
}
type = "Usage"
}
scope = "string"
}
eTag = "string"
})
}
It does not support the following parameters, so I received this error:
Error: the `body` is invalid:
│ `properties.fileDestination` is not expected here. Do you mean `properties.metric`?
│ `properties.notification` is not expected here. Do you mean `properties.modifiedOn`?
Microsoft.CostManagement/scheduledActions does have these parameters. An azapi resource of type Microsoft.CostManagement/scheduledActions can be tried instead, with an appropriate parent_id and the view referenced from it; see the sketch below.
Reference: Microsoft.CostManagement/scheduledActions - Bicep, ARM template & Terraform AzAPI reference | Microsoft Learn
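A rough, untested sketch of what such an azapi_resource could look like, based on the payload from the question; the API version (2022-10-01 here), the billing-account scope and the view reference are assumptions to check against the AzAPI reference:
# Hypothetical sketch only: scheduled action (email subscription) via the AzAPI provider.
# The type version, parent_id scope and view reference are placeholders to adapt.
resource "azapi_resource" "cost_alert_schedule" {
  type      = "Microsoft.CostManagement/scheduledActions@2022-10-01"
  name      = "test-scheduled-action"
  parent_id = "/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}"

  body = jsonencode({
    kind = "Email"
    properties = {
      displayName = "Test"
      status      = "Enabled"
      viewId      = azapi_resource.symbolicname.id # the view created above
      scope       = "/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}"
      schedule = {
        frequency  = "Weekly"
        startDate  = "2023-01-11T02:30:00.000Z"
        endDate    = "2024-01-10T18:30:00.000Z"
        daysOfWeek = ["Wednesday"]
      }
      notification = {
        to      = ["test@microsoft.com"]
        subject = "Test"
        message = "Test"
      }
      fileDestination = {
        fileFormats = ["Csv"]
      }
    }
  })
}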

Checkov failing CKV2_AWS_4 "Ensure API Gateway stage have logging level defined as appropriate" even on Checkov example

We are using Terraform to describe AWS API Gateway objects and Checkov to check our plan output. Originally we found we could not get Checkov to pass, as it always failed on CKV2_AWS_4 "Ensure API Gateway stage have logging level defined as appropriate".
Since then we have tried using both the Checkov site example and the Terraform example in place of our production apiGw but these fail too. Link to Checkov example:-
https://docs.bridgecrew.io/docs/ensure-api-gateway-stage-have-logging-level-defined-as-appropiate
The notation for the Checkov test failing is:-
metadata:
  id: "CKV2_AWS_4"
  name: "Ensure API Gateway stage have logging level defined as appropriate"
  category: "LOGGING"
definition:
  and:
    - resource_types:
        - aws_api_gateway_stage
      connected_resource_types:
        - aws_api_gateway_method_settings
      operator: exists
      cond_type: connection
    - or:
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.logging_level"
          operator: "equals"
          value: "ERROR"
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.logging_level"
          operator: "equals"
          value: "INFO"
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.metrics_enabled"
          operator: "equals"
          value: true
    - cond_type: filter
      attribute: resource_type
      value:
        - aws_api_gateway_stage
      operator: within
Here is our TF which is an expanded version of the Terraform apiGw example:-
data "aws_caller_identity" "current" {}
locals {
# The target account number
account_id = data.aws_caller_identity.current.account_id
# Local variable this is likely to be one of the following: development, nonproduction, production, feature/{name}.
name_suffix = terraform.workspace
}
resource "aws_api_gateway_rest_api" "example" {
body = jsonencode({
openapi = "3.0.1"
info = {
title = "example"
version = "1.0"
}
paths = {
"/path1" = {
get = {
x-amazon-apigateway-integration = {
httpMethod = "GET"
payloadFormatVersion = "1.0"
type = "HTTP_PROXY"
uri = "https://ip-ranges.amazonaws.com/ip-ranges.json"
}
}
}
}
})
name = "example"
}
resource "aws_api_gateway_deployment" "example" {
rest_api_id = aws_api_gateway_rest_api.example.id
triggers = {
redeployment = sha1(jsonencode(aws_api_gateway_rest_api.example.body))
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_api_gateway_stage" "example" {
deployment_id = "${aws_api_gateway_deployment.example.id}"
rest_api_id = "${aws_api_gateway_rest_api.example.id}"
stage_name = "example"
cache_cluster_enabled = true
cache_cluster_size = 6.1
xray_tracing_enabled = true
access_log_settings {
destination_arn = aws_cloudwatch_log_group.transfer_apigw_log_group.arn
format = "$context.identity.sourceIp,$context.identity.caller,$context.identity.user,$context.requestTime,$context.httpMethod,$context.resourcePath,$context.protocol,$context.status,$context.responseLength,$context.requestId,$context.extendedRequestId"
}
}
resource "aws_api_gateway_method_settings" "all" {
rest_api_id = "${aws_api_gateway_rest_api.example.id}"
stage_name = "${aws_api_gateway_stage.example.stage_name}"
method_path = "*/*"
settings {
metrics_enabled = true
logging_level = "ERROR"
caching_enabled = true
}
}
resource "aws_api_gateway_method_settings" "path_specific" {
rest_api_id = aws_api_gateway_rest_api.example.id
stage_name = aws_api_gateway_stage.example.stage_name
method_path = "path1/GET"
settings {
metrics_enabled = true
logging_level = "INFO"
caching_enabled = true
}
}
resource "aws_cloudwatch_log_group" "transfer_apigw_log_group" {
name = "transfer_apigw_log_group-${var.region}-${local.name_suffix}"
retention_in_days = 30
kms_key_id = "alias/aws/apigateway"
}
When TF plan runs we get this result which Checkov reads:-
{
"format_version": "1.1",
"terraform_version": "1.2.7",
"planned_values": {
"root_module": {
"child_modules": [
{
"resources": [
{
"address": "module.api_gateway_uk.aws_api_gateway_deployment.example",
"mode": "managed",
"type": "aws_api_gateway_deployment",
"name": "example",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"description": null,
"stage_description": null,
"stage_name": null,
"triggers": {
"redeployment": "145be397ea51cabb14595b0f0ace006017953f0a"
},
"variables": null
},
"sensitive_values": {
"triggers": {}
}
},
{
"address": "module.api_gateway_uk.aws_api_gateway_method_settings.all",
"mode": "managed",
"type": "aws_api_gateway_method_settings",
"name": "all",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"method_path": "*/*",
"settings": [
{
"caching_enabled": true,
"logging_level": "ERROR",
"metrics_enabled": true,
"throttling_burst_limit": -1,
"throttling_rate_limit": -1
}
],
"stage_name": "example"
},
"sensitive_values": {
"settings": [
{}
]
}
},
{
"address": "module.api_gateway_uk.aws_api_gateway_method_settings.path_specific",
"mode": "managed",
"type": "aws_api_gateway_method_settings",
"name": "path_specific",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"method_path": "path1/GET",
"settings": [
{
"caching_enabled": true,
"logging_level": "INFO",
"metrics_enabled": true,
"throttling_burst_limit": -1,
"throttling_rate_limit": -1
}
],
"stage_name": "example"
},
"sensitive_values": {
"settings": [
{}
]
}
},
{
"address": "module.api_gateway_uk.aws_api_gateway_rest_api.example",
"mode": "managed",
"type": "aws_api_gateway_rest_api",
"name": "example",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"body": "{\"info\":{\"title\":\"example\",\"version\":\"1.0\"},\"openapi\":\"3.0.1\",\"paths\":{\"/path1\":{\"get\":{\"x-amazon-apigateway-integration\":{\"httpMethod\":\"GET\",\"payloadFormatVersion\":\"1.0\",\"type\":\"HTTP_PROXY\",\"uri\":\"https://ip-ranges.amazonaws.com/ip-ranges.json\"}}}}}",
"minimum_compression_size": -1,
"name": "example",
"parameters": null,
"put_rest_api_mode": null,
"tags": null
},
"sensitive_values": {
"binary_media_types": [],
"endpoint_configuration": [],
"tags_all": {}
}
},
{
"address": "module.api_gateway_uk.aws_api_gateway_stage.example",
"mode": "managed",
"type": "aws_api_gateway_stage",
"name": "example",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"access_log_settings": [
{
"format": "$context.identity.sourceIp,$context.identity.caller,$context.identity.user,$context.requestTime,$context.httpMethod,$context.resourcePath,$context.protocol,$context.status,$context.responseLength,$context.requestId,$context.extendedRequestId"
}
],
"cache_cluster_enabled": true,
"cache_cluster_size": "6.1",
"canary_settings": [],
"client_certificate_id": null,
"description": null,
"documentation_version": null,
"stage_name": "example",
"tags": null,
"variables": null,
"xray_tracing_enabled": true
},
"sensitive_values": {
"access_log_settings": [
{}
],
"canary_settings": [],
"tags_all": {}
}
},
{
"address": "module.api_gateway_uk.aws_cloudwatch_log_group.transfer_apigw_log_group",
"mode": "managed",
"type": "aws_cloudwatch_log_group",
"name": "transfer_apigw_log_group",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"kms_key_id": "alias/aws/apigateway",
"name": "transfer_apigw_log_group-uk-default",
"retention_in_days": 30,
"skip_destroy": false,
"tags": null
},
"sensitive_values": {
"tags_all": {}
}
}
],
"address": "module.api_gateway_uk"
}
<SNIP>
I'm wondering which rule is being broken in the Checkov test. Could it be the 'connection' between objects, like the apiGw stage and the REST API? I am not clear how the TF plan output shows connections between objects, but the TF plan itself passes without any issues.
Thanks in advance.
Jon

Iterate over multiple state files to get a list of strings in Terraform

I am a newbie and have an issue retrieving values from state files. What I want is to retrieve the value of vpc_id from multiple state files and build a list of strings out of it, so that it can be passed to a resource.
Input
locals.tf:
locals {
aws_regions = toset(["eu-west-1", "eu-central-1", "us-east-2", "us-west-2", "ap-south-1", "ap-southeast-1"])
terraform_state_file = "eu-west-1/terraform.tfstate"
# terragrunt_state_file = "${each.value}/vpc-layout/terragrunt.tfstate"
}
state.tf
data "terraform_remote_state" "plt-network-state" {
backend = "s3"
for_each = toset(local.aws_regions)
config = {
bucket = "tfstate-316899010651"
key = each.value == "eu-west-1" ? local.terraform_state_file : "${each.value}/vpc-layout/terragrunt.tfstate"
region = "eu-west-1"
}
}
Now I want to iterate over the state files received to get the value of vpc_id from each of them:
terraform state file:
{
"version": 4,
"terraform_version": "1.0.5",
"serial": 1117,
"lineage": "5a401d1e-ec22-5ae0-5170-aa5b484f89cb",
"outputs": {
"dev_vpc_id": {
"value": "xxx",
"type": "string"
},
"acc_vpc_id": {
"value": "yyy",
"type": "string"
},
terragrunt state file
{
"version": 4,
"terraform_version": "0.13.5",
"serial": 13,
"lineage": "6a1eb7fb-82c5-b70a-c8ec-8734102fafdd",
"outputs": {
"vpcs_all": {
"value": [
{
"environment": "acceptance",
"id": "xxx"
},
{
"environment": "development",
"id": "yyy"
},
{
"environment": "production",
"id": "zzz"
}
]
}
}
}
I want the IDs for all envs as a list of strings so that it can be passed to the resource below. Something like:
locals {
all_vpc_ids = [
data.terraform_remote_state.plt-network-state[each.key].outputs != "" ? data.terraform_remote_state.plt-network-state[each.key].outputs.development_vpc_id : [for v in data.terraform_remote_state.plt-network-state[each.key].outputs.vpcs_all[*] : format("%q", v.id) if v.environment == "development" ],
]
}
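For illustration only, an untested sketch of the kind of expression being aimed for, assuming the two output shapes shown in the state files above:
# Untested sketch: collect the development VPC ID from each remote state,
# handling both output shapes (dev_vpc_id vs. vpcs_all) shown above.
locals {
  all_vpc_ids = flatten([
    for region, state in data.terraform_remote_state.plt-network-state : (
      contains(keys(state.outputs), "dev_vpc_id")
      ? [state.outputs.dev_vpc_id]
      : [for v in state.outputs.vpcs_all : v.id if v.environment == "development"]
    )
  ])
}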
This needs to be passed to:
resource "aws_route53_zone" "demo" {
comment = "xyz.com"
name = "xyz.com"
dynamic "vpc" {
for_each = local.all_vpc_ids
content {
vpc_id = vpc.value
}
}
tags = {}
}
Any advice or help is much appreciated !!!
Thanks in advance.

How can I use azurerm_resource_group_template_deployment for azure budget resource but ignore changes in start and end date?

Maybe related: azurerm_resource_group_template_deployment ignoring parameter file
I would like to use the resource azurerm_resource_group_template_deployment from azurerm provider version 2.37. But there is the problem that Terraform wants to reapply the resource every month, so I thought I could tell it to ignore changes to the start date and end date. However, this would (unlike the deprecated resource azurerm_template_deployment) need a computed operation, namely jsondecode, which is not allowed in ignore_changes. I.e. the following code would not work:
terraform {
required_version = "~> 0.13.0"
required_providers {
azurerm = "~> 2.37.0"
}
}
provider azurerm {
features {}
}
locals {
budget_start_date = formatdate("YYYY-MM-01", timestamp())
budget_end_date = formatdate("YYYY-MM-01", timeadd(timestamp(), "17568h"))
budget_params = jsonencode({
"budgetName" = "budgettest",
"amount" = "4000",
"timeGrain" = "Annually",
"startDate" = local.budget_start_date,
"endDate" = local.budget_end_date,
"firstThreshold" = "75",
"secondThreshold" = "100",
"thirdThreshold" = "50",
"contactGroups" = ""
})
}
resource "azurerm_resource_group" "rg" {
# A subscription cannot have more than 980 resource groups:
# https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits
name = "example-rg"
location = "westeurope"
}
resource "azurerm_resource_group_template_deployment" "dsw_budget" {
name = "test-budget-template"
resource_group_name = azurerm_resource_group.rg.name
deployment_mode = "Incremental"
template_content = file("${path.module}/arm/budget_deploy.json")
parameters_content = local.budget_params
lifecycle {
ignore_changes = [
jsondecode(parameters_content)["startDate"],
jsondecode(parameters_content)["endDate"]
]
}
}
For the sake of completeness, content of budget_deploy.json:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"budgetName": {
"type": "string",
"defaultValue": "MyBudget"
},
"amount": {
"type": "string",
"defaultValue": "1000"
},
"timeGrain": {
"type": "string",
"defaultValue": "Monthly",
"allowedValues": [
"Monthly",
"Quarterly",
"Annually"
]
},
"startDate": {
"type": "string"
},
"endDate": {
"type": "string"
},
"firstThreshold": {
"type": "string",
"defaultValue": "90"
},
"secondThreshold": {
"type": "string",
"defaultValue": "110"
},
"thirdThreshold": {
"type": "string",
"defaultValue": "80"
},
"contactEmails": {
"type": "string",
"defaultValue": ""
},
"contactGroups": {
"type": "string",
"defaultValue": ""
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]"
}
},
"variables": {
"groups": "[split(parameters('contactGroups'),',')]"
},
"resources": [
{
"name": "[parameters('budgetName')]",
"type": "Microsoft.Consumption/budgets",
"location": "[parameters('location')]",
"apiVersion": "2019-10-01",
"properties": {
"timePeriod": {
"startDate": "[parameters('startDate')]",
"endDate": "[parameters('endDate')]"
},
"timeGrain": "[parameters('timeGrain')]",
"amount": "[parameters('amount')]",
"category": "Cost",
"notifications": {
"NotificationForExceededBudget1": {
"enabled": true,
"operator": "GreaterThan",
"threshold": "[parameters('firstThreshold')]",
"contactGroups": "[variables('groups')]"
},
"NotificationForExceededBudget2": {
"enabled": true,
"operator": "GreaterThan",
"threshold": "[parameters('secondThreshold')]",
"contactGroups": "[variables('groups')]"
},
"NotificationForExceededBudget3": {
"enabled": true,
"operator": "GreaterThan",
"threshold": "[parameters('thirdThreshold')]",
"contactGroups": "[variables('groups')]"
}
}
}
}
]
}
Is there any way that I can still achieve my goal? - thank you!
I don't think the way you use ignore_changes is right. Take a look at ignore_changes in the lifecycle block for any resource: it should reference a property of the resource you are creating, not a derived value. In addition, if you want to change resources via an Azure template in Terraform, it's better to use the Incremental deployment_mode and not change the property whose changes you want to ignore.
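For illustration, a generic sketch (not specific to the template deployment resource): ignore_changes takes attribute paths of the resource itself, for example:
# Generic example: ignore future changes to the "tags" attribute of this resource.
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "westeurope"
  tags = {
    created = "2021-01-01"
  }

  lifecycle {
    ignore_changes = [
      tags,
    ]
  }
}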
I resorted to using tags for the start and end date of the budget. The ignore_changes approach would work for the deprecated azurerm_template_deployment, since parameters is of type map in that case rather than a JSON string, like so:
terraform {
required_version = "~> 0.13.0"
required_providers {
azurerm = "~> 2.37.0"
}
}
provider azurerm {
features {}
}
locals {
budget_start_date = formatdate("YYYY-MM-01", timestamp())
budget_end_date = formatdate("YYYY-MM-01", timeadd(timestamp(), "17568h"))
budget_params = {
"budgetName" = "budgettest",
"amount" = "4000",
"timeGrain" = "Annually",
"startDate" = local.budget_start_date,
"endDate" = local.budget_end_date,
"firstThreshold" = "75",
"secondThreshold" = "100",
"thirdThreshold" = "50",
"contactGroups" = ""
}
}
resource "azurerm_resource_group" "rg" {
# A subscription cannot have more than 980 resource groups:
# https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits
name = "example-rg"
location = "westeurope"
}
resource "azurerm_template_deployment" "dsw_budget" {
name = "test-budget-template"
resource_group_name = azurerm_resource_group.rg.name
deployment_mode = "Incremental"
template_body = file("${path.module}/arm/budget_deploy.json")
parameters = local.budget_params
lifecycle {
ignore_changes = [
parameters["startDate"],
parameters["endDate"]
]
}
}
Now this is not possible anymore with azurerm_resource_group_template_deployment, as JSON content has to be passed, and therefore ignore_changes would have to perform a json-decoding, which is a computed operation and not allowed.
Therefore, to solve my problem of pinning the start and end dates, I resorted to using tags for them and a data source querying those tags:
terraform {
required_version = "~> 0.13.0"
required_providers {
azurerm = "~> 2.37.0"
}
}
provider azurerm {
features {
template_deployment {
delete_nested_items_during_deletion = false
}
}
}
data "azurerm_resources" "aml" {
resource_group_name = "${var.tk_name_id}-${local.stage}-rg"
type = "Microsoft.MachineLearningServices/workspaces"
}
locals {
budget_start_date_tag = try(element(data.azurerm_resources.aml.resources[*].tags.budget_start_date, 0), "NA")
budget_end_date_tag = try(element(data.azurerm_resources.aml.resources[*].tags.budget_end_date, 0), "NA")
should_test_budget = local.is_test_stage_boolean && var.test_budget
budget_start_date = local.budget_start_date_tag != "NA" ? local.budget_start_date_tag : (local.should_test_budget ? "START DATE FAIL!" : formatdate("YYYY-MM-01", timestamp()))
budget_end_date = local.budget_end_date_tag != "NA" ? local.budget_end_date_tag : (local.should_test_budget ? "END DATE FAIL!" : formatdate("YYYY-MM-01", timeadd(timestamp(), "17568h")))
budget_date_tags = {
"budget_start_date" : local.budget_start_date,
"budget_end_date" : local.budget_end_date
}
}
#--------------------------------------------------------------------------------------------------------------------
# DSW: Resource Group
# --------------------------------------------------------------------------------------------------------------------
resource "azurerm_resource_group" "rg" {
# A subscription cannot have more than 980 resource groups:
# https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits
count = local.no_addresses_available_boolean ? 0 : 1
name = "test-rg"
location = var.location
tags = local.budget_date_tags
}
resource "azurerm_machine_learning_workspace" "aml_workspace" {
name = local.aml_ws_name
resource_group_name = azurerm_resource_group.rg[0].name
location = azurerm_resource_group.rg[0].location
application_insights_id = azurerm_application_insights.aml_insights.id
key_vault_id = azurerm_key_vault.aml_kv.id
storage_account_id = azurerm_storage_account.aml_st.id
container_registry_id = azurerm_container_registry.aml_acr.id
sku_name = "Basic"
tags = merge(var.azure_tags, local.budget_date_tags)
identity {
type = "SystemAssigned"
}
}
@Charles Xu I did not quite test it yet, and I am also not sure if this is the best solution.
EDIT: Now I actually run into a cyclic dependency, because the data source obviously does not exist before the resource group is created: https://github.com/hashicorp/terraform/issues/16380.

Terraform rejecting JSON template_file

The following ECS task definition is being rejected by Terraform during a plan. The JSON validates, and using inline container_definitions works fine.
I've Googled and read some commentary stating that TF has an issue with JSON objects, mostly related to nesting. I can get around this by placing the JSON into container_definitions directly in the resource block for the task definition, but I would prefer to keep it in a template file.
Error: Error running plan: 1 error(s) occurred:
* module.sonarqube.aws_ecs_task_definition.task: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go struct field ContainerDefinition.Memory of type int64
JSON Document referenced in template_file:
{
"name": "sonarqube",
"image": "sonarqube:7.5-community",
"memory": "2048",
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${log-group}",
"awslogs-region": "${region}",
"awslogs-stream-prefix": "ecs"
}
},
"portMappings": {
"hostPort": "9000",
"protocol": "tcp",
"containerPort": "9000"
},
"environment": [
{
"name": "sonar.jdbc.password",
"value": "${password}"
},
{
"name": "sonar.jdbc.url",
"value": "${url}/${extra_url}"
},
{
"name": "sonar.jdbc.username",
"value": "${username}"
}
]
}
Relevant TF Blocks:
data "template_file" "task-def" {
template = "${file("${path.module}/task-def.json")}"
vars = {
log-group = "/ecs/${var.cluster_name}-${var.name}"
region = "${var.region}"
url = "jdbc:postgresql://${var.rds_url}${var.extra_url}"
username = "${var.username}"
password = "${var.password}"
}
}
resource "aws_ecs_task_definition" "task" {
family = "${var.name}"
network_mode = "bridge"
cpu = "1024"
memory = "2048"
execution_role_arn = "${var.ecs-exec-role}"
container_definitions = "${data.template_file.task-def.rendered}"
}
Terraform expects the JSON in a slightly different format. After you fix the following, it will work:
Memory size and port numbers should be integers, not strings.
Terraform wants an array of objects, not a single JSON object.
The variable ${extra_url} was not passed in to template_file.task-def.
Fixed version of task-def.json, tested on terraform v0.11.13 and provider.aws v2.9.0:
[
  {
    "name": "sonarqube",
    "image": "sonarqube:7.5-community",
    "memory": 2048,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${log-group}",
        "awslogs-region": "${region}",
        "awslogs-stream-prefix": "ecs"
      }
    },
    "portMappings": [
      {
        "hostPort": 9000,
        "protocol": "tcp",
        "containerPort": 9000
      }
    ],
    "environment": [
      {
        "name": "sonar.jdbc.password",
        "value": "${password}"
      },
      {
        "name": "sonar.jdbc.url",
        "value": "${url}/${extra_url}"
      },
      {
        "name": "sonar.jdbc.username",
        "value": "${username}"
      }
    ]
  }
]
Fixed version of template_file.task-def:
data "template_file" "task-def" {
template = "${file("${path.module}/task-def.json")}"
vars = {
log-group = "/ecs/${var.cluster_name}-${var.name}"
region = "${var.region}"
url = "jdbc:postgresql://${var.rds_url}${var.extra_url}"
username = "${var.username}"
password = "${var.password}"
extra_url = "${var.extra_url}"
}
}
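As an alternative sketch (not part of the original answer, and assuming Terraform 0.12 or later), the definition can also be built with jsonencode directly, which keeps memory and port numbers as integers and avoids the template file entirely:
# Alternative, untested sketch: build container_definitions with jsonencode.
resource "aws_ecs_task_definition" "task" {
  family             = var.name
  network_mode       = "bridge"
  cpu                = "1024"
  memory             = "2048"
  execution_role_arn = var.ecs-exec-role

  container_definitions = jsonencode([
    {
      name   = "sonarqube"
      image  = "sonarqube:7.5-community"
      memory = 2048
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = "/ecs/${var.cluster_name}-${var.name}"
          "awslogs-region"        = var.region
          "awslogs-stream-prefix" = "ecs"
        }
      }
      portMappings = [
        { containerPort = 9000, hostPort = 9000, protocol = "tcp" }
      ]
      environment = [
        { name = "sonar.jdbc.url", value = "jdbc:postgresql://${var.rds_url}${var.extra_url}" },
        { name = "sonar.jdbc.username", value = var.username },
        { name = "sonar.jdbc.password", value = var.password }
      ]
    }
  ])
}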
