Terraform rejecting JSON template_file

The following ECS task definition is being rejected by Terraform during a plan. The JSON itself validates, and using an inline container_definitions block works fine.
I've Googled and read some commentary that Terraform has an issue with JSON objects, mostly related to nesting. I can get around this by placing the JSON directly in the container_definitions argument of the task definition resource, but I would prefer to keep it in a template file.
Error: Error running plan: 1 error(s) occurred:
* module.sonarqube.aws_ecs_task_definition.task: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go struct field ContainerDefinition.Memory of type int64
JSON Document referenced in template_file:
{
  "name": "sonarqube",
  "image": "sonarqube:7.5-community",
  "memory": "2048",
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "${log-group}",
      "awslogs-region": "${region}",
      "awslogs-stream-prefix": "ecs"
    }
  },
  "portMappings": {
    "hostPort": "9000",
    "protocol": "tcp",
    "containerPort": "9000"
  },
  "environment": [
    {
      "name": "sonar.jdbc.password",
      "value": "${password}"
    },
    {
      "name": "sonar.jdbc.url",
      "value": "${url}/${extra_url}"
    },
    {
      "name": "sonar.jdbc.username",
      "value": "${username}"
    }
  ]
}
Relevant TF blocks:
data "template_file" "task-def" {
  template = "${file("${path.module}/task-def.json")}"
  vars = {
    log-group = "/ecs/${var.cluster_name}-${var.name}"
    region    = "${var.region}"
    url       = "jdbc:postgresql://${var.rds_url}${var.extra_url}"
    username  = "${var.username}"
    password  = "${var.password}"
  }
}

resource "aws_ecs_task_definition" "task" {
  family                = "${var.name}"
  network_mode          = "bridge"
  cpu                   = "1024"
  memory                = "2048"
  execution_role_arn    = "${var.ecs-exec-role}"
  container_definitions = "${data.template_file.task-def.rendered}"
}

Terraform expects the JSON in a slightly different format. It will work once you fix the following:
Memory size and port numbers should be integers, not strings.
Terraform wants a JSON array of container objects, not a bare JSON object.
The variable ${extra_url} was not passed into template_file.task-def.
Fixed version of task-def.json, tested on Terraform v0.11.13 and provider.aws v2.9.0:
[
  {
    "name": "sonarqube",
    "image": "sonarqube:7.5-community",
    "memory": 2048,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${log-group}",
        "awslogs-region": "${region}",
        "awslogs-stream-prefix": "ecs"
      }
    },
    "portMappings": [
      {
        "hostPort": 9000,
        "protocol": "tcp",
        "containerPort": 9000
      }
    ],
    "environment": [
      {
        "name": "sonar.jdbc.password",
        "value": "${password}"
      },
      {
        "name": "sonar.jdbc.url",
        "value": "${url}/${extra_url}"
      },
      {
        "name": "sonar.jdbc.username",
        "value": "${username}"
      }
    ]
  }
]
Fixed version of template_file.task-def:
data "template_file" "task-def" {
  template = "${file("${path.module}/task-def.json")}"
  vars = {
    log-group = "/ecs/${var.cluster_name}-${var.name}"
    region    = "${var.region}"
    url       = "jdbc:postgresql://${var.rds_url}${var.extra_url}"
    username  = "${var.username}"
    password  = "${var.password}"
    extra_url = "${var.extra_url}"
  }
}
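On newer Terraform (0.12+), one way to sidestep this whole class of quoting and type mistakes is to build the container definition as an HCL value and let jsonencode produce the JSON, so numbers stay numbers. A minimal sketch under that assumption, reusing the variable names from the question:

```hcl
# Sketch (Terraform 0.12+): jsonencode emits well-typed JSON,
# so memory and ports are rendered as numbers, not strings,
# and the result is always a JSON array of container objects.
resource "aws_ecs_task_definition" "task" {
  family             = var.name
  network_mode       = "bridge"
  cpu                = "1024"
  memory             = "2048"
  execution_role_arn = var.ecs-exec-role

  container_definitions = jsonencode([
    {
      name   = "sonarqube"
      image  = "sonarqube:7.5-community"
      memory = 2048
      portMappings = [
        { hostPort = 9000, protocol = "tcp", containerPort = 9000 }
      ]
    }
  ])
}
```

With this approach no template file or template_file data source is needed at all.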

Related

Checkov failing CKV2_AWS_4 "Ensure API Gateway stage have logging level defined as appropriate" even on Checkov example

We are using Terraform to describe AWS API Gateway objects and Checkov to check our plan output. Originally we could not get Checkov to pass: it always failed on CKV2_AWS_4 "Ensure API Gateway stage have logging level defined as appropriate".
Since then we have tried using both the Checkov site example and the Terraform example in place of our production API Gateway, but these fail too. Link to the Checkov example:
https://docs.bridgecrew.io/docs/ensure-api-gateway-stage-have-logging-level-defined-as-appropiate
The definition of the failing Checkov check is:
metadata:
  id: "CKV2_AWS_4"
  name: "Ensure API Gateway stage have logging level defined as appropriate"
  category: "LOGGING"
definition:
  and:
    - resource_types:
        - aws_api_gateway_stage
      connected_resource_types:
        - aws_api_gateway_method_settings
      operator: exists
      cond_type: connection
    - or:
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.logging_level"
          operator: "equals"
          value: "ERROR"
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.logging_level"
          operator: "equals"
          value: "INFO"
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.metrics_enabled"
          operator: "equals"
          value: true
    - cond_type: filter
      attribute: resource_type
      value:
        - aws_api_gateway_stage
      operator: within
Here is our Terraform, which is an expanded version of the Terraform API Gateway example:
data "aws_caller_identity" "current" {}

locals {
  # The target account number
  account_id = data.aws_caller_identity.current.account_id
  # Local variable; this is likely to be one of the following: development, nonproduction, production, feature/{name}.
  name_suffix = terraform.workspace
}

resource "aws_api_gateway_rest_api" "example" {
  body = jsonencode({
    openapi = "3.0.1"
    info = {
      title   = "example"
      version = "1.0"
    }
    paths = {
      "/path1" = {
        get = {
          x-amazon-apigateway-integration = {
            httpMethod           = "GET"
            payloadFormatVersion = "1.0"
            type                 = "HTTP_PROXY"
            uri                  = "https://ip-ranges.amazonaws.com/ip-ranges.json"
          }
        }
      }
    }
  })
  name = "example"
}

resource "aws_api_gateway_deployment" "example" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  triggers = {
    redeployment = sha1(jsonencode(aws_api_gateway_rest_api.example.body))
  }
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_stage" "example" {
  deployment_id         = "${aws_api_gateway_deployment.example.id}"
  rest_api_id           = "${aws_api_gateway_rest_api.example.id}"
  stage_name            = "example"
  cache_cluster_enabled = true
  cache_cluster_size    = 6.1
  xray_tracing_enabled  = true
  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.transfer_apigw_log_group.arn
    format          = "$context.identity.sourceIp,$context.identity.caller,$context.identity.user,$context.requestTime,$context.httpMethod,$context.resourcePath,$context.protocol,$context.status,$context.responseLength,$context.requestId,$context.extendedRequestId"
  }
}

resource "aws_api_gateway_method_settings" "all" {
  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
  stage_name  = "${aws_api_gateway_stage.example.stage_name}"
  method_path = "*/*"
  settings {
    metrics_enabled = true
    logging_level   = "ERROR"
    caching_enabled = true
  }
}

resource "aws_api_gateway_method_settings" "path_specific" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  stage_name  = aws_api_gateway_stage.example.stage_name
  method_path = "path1/GET"
  settings {
    metrics_enabled = true
    logging_level   = "INFO"
    caching_enabled = true
  }
}

resource "aws_cloudwatch_log_group" "transfer_apigw_log_group" {
  name              = "transfer_apigw_log_group-${var.region}-${local.name_suffix}"
  retention_in_days = 30
  kms_key_id        = "alias/aws/apigateway"
}
When terraform plan runs we get this result, which Checkov reads:
{
  "format_version": "1.1",
  "terraform_version": "1.2.7",
  "planned_values": {
    "root_module": {
      "child_modules": [
        {
          "resources": [
            {
              "address": "module.api_gateway_uk.aws_api_gateway_deployment.example",
              "mode": "managed",
              "type": "aws_api_gateway_deployment",
              "name": "example",
              "provider_name": "registry.terraform.io/hashicorp/aws",
              "schema_version": 0,
              "values": {
                "description": null,
                "stage_description": null,
                "stage_name": null,
                "triggers": {
                  "redeployment": "145be397ea51cabb14595b0f0ace006017953f0a"
                },
                "variables": null
              },
              "sensitive_values": {
                "triggers": {}
              }
            },
            {
              "address": "module.api_gateway_uk.aws_api_gateway_method_settings.all",
              "mode": "managed",
              "type": "aws_api_gateway_method_settings",
              "name": "all",
              "provider_name": "registry.terraform.io/hashicorp/aws",
              "schema_version": 0,
              "values": {
                "method_path": "*/*",
                "settings": [
                  {
                    "caching_enabled": true,
                    "logging_level": "ERROR",
                    "metrics_enabled": true,
                    "throttling_burst_limit": -1,
                    "throttling_rate_limit": -1
                  }
                ],
                "stage_name": "example"
              },
              "sensitive_values": {
                "settings": [
                  {}
                ]
              }
            },
            {
              "address": "module.api_gateway_uk.aws_api_gateway_method_settings.path_specific",
              "mode": "managed",
              "type": "aws_api_gateway_method_settings",
              "name": "path_specific",
              "provider_name": "registry.terraform.io/hashicorp/aws",
              "schema_version": 0,
              "values": {
                "method_path": "path1/GET",
                "settings": [
                  {
                    "caching_enabled": true,
                    "logging_level": "INFO",
                    "metrics_enabled": true,
                    "throttling_burst_limit": -1,
                    "throttling_rate_limit": -1
                  }
                ],
                "stage_name": "example"
              },
              "sensitive_values": {
                "settings": [
                  {}
                ]
              }
            },
            {
              "address": "module.api_gateway_uk.aws_api_gateway_rest_api.example",
              "mode": "managed",
              "type": "aws_api_gateway_rest_api",
              "name": "example",
              "provider_name": "registry.terraform.io/hashicorp/aws",
              "schema_version": 0,
              "values": {
                "body": "{\"info\":{\"title\":\"example\",\"version\":\"1.0\"},\"openapi\":\"3.0.1\",\"paths\":{\"/path1\":{\"get\":{\"x-amazon-apigateway-integration\":{\"httpMethod\":\"GET\",\"payloadFormatVersion\":\"1.0\",\"type\":\"HTTP_PROXY\",\"uri\":\"https://ip-ranges.amazonaws.com/ip-ranges.json\"}}}}}",
                "minimum_compression_size": -1,
                "name": "example",
                "parameters": null,
                "put_rest_api_mode": null,
                "tags": null
              },
              "sensitive_values": {
                "binary_media_types": [],
                "endpoint_configuration": [],
                "tags_all": {}
              }
            },
            {
              "address": "module.api_gateway_uk.aws_api_gateway_stage.example",
              "mode": "managed",
              "type": "aws_api_gateway_stage",
              "name": "example",
              "provider_name": "registry.terraform.io/hashicorp/aws",
              "schema_version": 0,
              "values": {
                "access_log_settings": [
                  {
                    "format": "$context.identity.sourceIp,$context.identity.caller,$context.identity.user,$context.requestTime,$context.httpMethod,$context.resourcePath,$context.protocol,$context.status,$context.responseLength,$context.requestId,$context.extendedRequestId"
                  }
                ],
                "cache_cluster_enabled": true,
                "cache_cluster_size": "6.1",
                "canary_settings": [],
                "client_certificate_id": null,
                "description": null,
                "documentation_version": null,
                "stage_name": "example",
                "tags": null,
                "variables": null,
                "xray_tracing_enabled": true
              },
              "sensitive_values": {
                "access_log_settings": [
                  {}
                ],
                "canary_settings": [],
                "tags_all": {}
              }
            },
            {
              "address": "module.api_gateway_uk.aws_cloudwatch_log_group.transfer_apigw_log_group",
              "mode": "managed",
              "type": "aws_cloudwatch_log_group",
              "name": "transfer_apigw_log_group",
              "provider_name": "registry.terraform.io/hashicorp/aws",
              "schema_version": 0,
              "values": {
                "kms_key_id": "alias/aws/apigateway",
                "name": "transfer_apigw_log_group-uk-default",
                "retention_in_days": 30,
                "skip_destroy": false,
                "tags": null
              },
              "sensitive_values": {
                "tags_all": {}
              }
            }
          ],
          "address": "module.api_gateway_uk"
        }
<SNIP>
I'm wondering which rule in the Checkov check is being broken. Could it be the 'connection' condition between objects, such as the API Gateway stage and the REST API? I am not clear on how the plan output expresses connections between objects, but the plan itself passes without any issues.
Thanks in advance.
Jon

Iterate over multiple state file to get a list of strings in terraform

I am a newbie and have an issue retrieving values from state files. What I want is to retrieve the value of vpc_id from multiple state files and build a list of strings out of them, so that the list can be passed to a resource.
Input
locals.tf:
locals {
  aws_regions          = toset(["eu-west-1", "eu-central-1", "us-east-2", "us-west-2", "ap-south-1", "ap-southeast-1"])
  terraform_state_file = "eu-west-1/terraform.tfstate"
  # terragrunt_state_file = "${each.value}/vpc-layout/terragrunt.tfstate"
}
state.tf:
data "terraform_remote_state" "plt-network-state" {
  backend  = "s3"
  for_each = toset(local.aws_regions)
  config = {
    bucket = "tfstate-316899010651"
    key    = each.value == "eu-west-1" ? local.terraform_state_file : "${each.value}/vpc-layout/terragrunt.tfstate"
    region = "eu-west-1"
  }
}
Now I want to iterate over the retrieved state files to get the value of vpc_id from each:
terraform state file:
{
  "version": 4,
  "terraform_version": "1.0.5",
  "serial": 1117,
  "lineage": "5a401d1e-ec22-5ae0-5170-aa5b484f89cb",
  "outputs": {
    "dev_vpc_id": {
      "value": "xxx",
      "type": "string"
    },
    "acc_vpc_id": {
      "value": "yyy",
      "type": "string"
    },
terragrunt state file:
{
  "version": 4,
  "terraform_version": "0.13.5",
  "serial": 13,
  "lineage": "6a1eb7fb-82c5-b70a-c8ec-8734102fafdd",
  "outputs": {
    "vpcs_all": {
      "value": [
        {
          "environment": "acceptance",
          "id": "xxx"
        },
        {
          "environment": "development",
          "id": "yyy"
        },
        {
          "environment": "production",
          "id": "zzz"
        }
      ]
    }
  }
}
I want the IDs from all environments as a list of strings so they can be passed to the resource below. Something like:
locals {
  all_vpc_ids = [
    data.terraform_remote_state.plt-network-state[each.key].outputs != ""
      ? data.terraform_remote_state.plt-network-state[each.key].outputs.development_vpc_id
      : [for v in data.terraform_remote_state.plt-network-state[each.key].outputs.vpcs_all[*] : format("%q", v.id) if v.environment == "development"],
  ]
}
This needs to be passed to:
resource "aws_route53_zone" "demo" {
  comment = "xyz.com"
  name    = "xyz.com"
  dynamic "vpc" {
    for_each = local.all_vpc_ids
    content {
      vpc_id = vpc.value
    }
  }
  tags = {}
}
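As a hedged sketch of how a locals block like the one attempted above could work (assuming the two output shapes shown in the state files, and that the development VPC is wanted from each region), one option is to branch on which output exists using can() and flatten the per-region results:

```hcl
# Sketch: collect one development VPC id per region, whichever
# output shape that region's state file uses. can() tests whether
# the vpcs_all output exists; all names follow the question.
locals {
  all_vpc_ids = flatten([
    for region, state in data.terraform_remote_state.plt-network-state : (
      can(state.outputs.vpcs_all)
      ? [for v in state.outputs.vpcs_all : v.id if v.environment == "development"]
      : [state.outputs.dev_vpc_id]
    )
  ])
}
```

Note that each.key is only valid inside a block that itself uses for_each, which is one reason the original attempt fails; iterating over the whole data-source map avoids that.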
Any advice or help is much appreciated !!!
Thanks in advance.

List to string in Terraform for ARM template use

I'm trying to turn a list(string) of subscriptions from my tfvars into a string inside a Terraform ARM template deployment.
I want the following:
"scope": [
  "/subscriptions/0-1",
  "/subscriptions/0-2"
]
I've attempted the following with no success:
"scope": [
  ${join(", ", each.value.subscriptions)}
]
I'm getting the following error
Error: expanding template_content: invalid character '/' looking for beginning of value
dev.tfvars
schedules = {
  01 = {
    name          = "01"
    subscriptions = ["/subscriptions/0-1", "/subscriptions/0-2"]
  }
  02 = {
    name          = "02"
    subscriptions = ["/subscriptions/0-1", "/subscriptions/0-2"]
  }
}
variables.tf
variable "schedules" {
  type = map(object({
    name          = string
    subscriptions = list(string)
  }))
}
updates.tf
resource "azurerm_resource_group_template_deployment" "updates" {
  for_each            = var.schedules
  name                = each.key
  resource_group_name = var.rg_name
  deployment_mode     = "Incremental"
  debug_level         = "requestContent"
  template_content    = <<TEMPLATE
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "apiVersion": "2017-05-15-preview",
      "type": "Microsoft.Automation/automationAccounts/softwareUpdateConfigurations",
      "name": "[concat(parameters('automationAccounts_automation_account_name'), '/test-schedule')]",
      "properties": {
        "updateConfiguration": {
          "operatingSystem": "Windows",
          "duration": "PT2H",
          "windows": {
            "excludedKbNumbers": [],
            "includedUpdateClassifications": "Critical, Security",
            "rebootSetting": "IfRequired"
          },
          "targets": {
            "azureQueries": [
              {
                "scope": [
                  ${join("\", \"", each.value.subscriptions)}
                ],
                "tagSettings": {
                  "tags": {
                    "updates": [
                      "test"
                    ]
                  },
                  "filterOperator": "All"
                }
              }
            ]
          }
        },
        "scheduleInfo": {
          "isEnabled": "true",
          "frequency": "Month",
          "interval": "1",
          "startTime": "2022-01-18T01:01:00+11:00",
          "timeZone": "Australia/Sydney",
          "advancedSchedule": {
            "monthlyOccurrences": [
              {
                "occurrence": "Saturday",
                "day": "2"
              }
            ]
          }
        }
      }
    }
  ]
}
TEMPLATE
}
You inserted the double quotes between each element, but they should be encasing each element.
${join(", ", [ for sub in each.value.subscriptions: "\"${sub}\"" ])}
Usually you would use jsonencode instead of join in cases like yours:
resource "azurerm_resource_group_template_deployment" "updates" {
  for_each            = var.schedules
  name                = each.key
  resource_group_name = var.rg_name
  deployment_mode     = "Incremental"
  debug_level         = "requestContent"
  template_content    = <<TEMPLATE
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "apiVersion": "2017-05-15-preview",
      "type": "Microsoft.Automation/automationAccounts/softwareUpdateConfigurations",
      "name": "[concat(parameters('automationAccounts_automation_account_name'), '/test-schedule')]",
      "properties": {
        "updateConfiguration": {
          "operatingSystem": "Windows",
          "duration": "PT2H",
          "windows": {
            "excludedKbNumbers": [],
            "includedUpdateClassifications": "Critical, Security",
            "rebootSetting": "IfRequired"
          },
          "targets": {
            "azureQueries": [
              {
                "scope": ${jsonencode(each.value.subscriptions)},
                "tagSettings": {
                  "tags": {
                    "updates": [
                      "test"
                    ]
                  },
                  "filterOperator": "All"
                }
              }
            ]
          }
        },
        "scheduleInfo": {
          "isEnabled": "true",
          "frequency": "Month",
          "interval": "1",
          "startTime": "2022-01-18T01:01:00+11:00",
          "timeZone": "Australia/Sydney",
          "advancedSchedule": {
            "monthlyOccurrences": [
              {
                "occurrence": "Saturday",
                "day": "2"
              }
            ]
          }
        }
      }
    }
  ]
}
TEMPLATE
}
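To see why jsonencode works here, note that it renders the whole list as one JSON array, quoting included. In terraform console (illustrative, using the values from dev.tfvars):

```hcl
# terraform console (illustrative)
# > jsonencode(["/subscriptions/0-1", "/subscriptions/0-2"])
# "[\"/subscriptions/0-1\",\"/subscriptions/0-2\"]"
```

Interpolated into the heredoc, that string is a valid JSON array, so the ARM template parser no longer encounters a bare / where it expects the beginning of a value.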

Passing many ENVs to terraform template

Is there a way to pass many ENVs to a template file in Terraform all at once, without declaring each of them separately like this:
data "template_file" "test_template" {
  template = file("templates/container.tpl")
  vars = {
    ENV1 = var.ENV1
    ENV2 = var.ENV2
    ENV3 = var.ENV3
    ENV4 = var.ENV4
    ENV5 = var.ENV5
  }
}
And have them inside the template like this?
[
  {
    "essential": true,
    "memory": 300,
    "name": "client",
    "cpu": 300,
    "image": "some_image",
    "portMappings": [
      {
        "containerPort": 3000,
        "hostPort": 0
      }
    ],
    "environment": [
      { "name": "ENV1", "value": "${ENV1}" },
      { "name": "ENV2", "value": "${ENV2}" },
      { "name": "ENV3", "value": "${ENV3}" },
      { "name": "ENV4", "value": "${ENV4}" },
      { "name": "ENV5", "value": "${ENV5}" }
    ]
  }
]
In Terraform 0.12+, it is recommended that you use the built-in templatefile function instead:
https://www.terraform.io/docs/configuration/functions/templatefile.html
Template:
%{ for addr in ip_addrs ~}
backend ${addr}:${port}
%{ endfor ~}
Call:
templatefile("${path.module}/backends.tmpl", { port = 8080, ip_addrs = ["10.0.0.1", "10.0.0.2"] })
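Applied to the question, the same pattern lets you pass a single map of environment variables and expand it inside the template in one expression. A sketch, assuming a var.envs map variable (not in the original):

```hcl
# variables.tf (hypothetical): one map instead of ENV1..ENV5
variable "envs" {
  type    = map(string)
  default = { ENV1 = "a", ENV2 = "b" }
}

# Inside templates/container.tpl, the whole environment list can be
# emitted with a single expression:
#   "environment": ${jsonencode([for k, v in envs : { name = k, value = v }])}
#
# and the template rendered with:
#   templatefile("${path.module}/templates/container.tpl", { envs = var.envs })
```

Adding or removing an ENV then only requires changing the map, not the template or the call site.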

Terraform AWS CloudWatch log group for ECS tasks/containers

I'm trying to create an AWS ECS task with Terraform that puts logs in a specific CloudWatch log group. The problem is that the container definition lives in a JSON file, and there is no way for me to map the CloudWatch group name from the .tf file into that .json file.
container_definition.json:
[
  {
    "name": "supreme-task",
    "image": "xxxx50690yyyy.dkr.ecr.eu-central-1.amazonaws.com/supreme-task",
    "essential": true,
    "portMappings": [
      {
        "containerPort": 5000,
        "hostPort": 5000
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "supreme-task-group", <- This needs to be taken from variable.tf
        "awslogs-region": "eu-central-1",
        "awslogs-stream-prefix": "streaming"
      }
    }
  }
]
variable.tf:
variable "ecs_task_definition_name" {
  description = "Task definition name."
  type        = string
  default     = "supreme-task-def"
}

variable "task_role" {
  description = "Name of the task role."
  type        = string
  default     = "supreme-task-role"
}

variable "task_execution_role" {
  description = "Name of the task execution role."
  type        = string
  default     = "supreme-task-exec-role"
}

variable "cloudwatch_group" {
  description = "CloudWatch group name."
  type        = string
  default     = "supreme-task-group"
}
task definition:
resource "aws_ecs_task_definition" "task_definition" {
  family                   = var.ecs_task_definition_name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 4096
  container_definitions    = file("modules/ecs-supreme-task/task-definition.json")
  execution_role_arn       = aws_iam_role.task_execution_role.name
  task_role_arn            = aws_iam_role.task_role.name
}
Is there a way to do that? Or should this be done differently?
Solved by following @ydaetskcorR's comment: made the container definition an inline parameter.
container_definitions = <<DEFINITION
[
  {
    "name": "${var.repository_name}",
    "image": "${var.repository_uri}",
    "essential": true,
    "portMappings": [
      {
        "containerPort": 5000,
        "hostPort": 5000
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${var.cloudwatch_group}",
        "awslogs-region": "eu-central-1",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }
]
DEFINITION
If you want to load the container definition as a template, to avoid inlining the content in the .tf files, you could:
1- Create the container definition as a template file with variables; just note that the extension would be .tpl:
container_definition.tpl
[
  {
    "name": "supreme-task",
    "image": "xxxx50690yyyy.dkr.ecr.eu-central-1.amazonaws.com/supreme-task",
    "essential": true,
    "portMappings": [
      {
        "containerPort": 5000,
        "hostPort": 5000
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${cloudwatch_group}",
        "awslogs-region": "eu-central-1",
        "awslogs-stream-prefix": "streaming"
      }
    }
  }
]
2- Then load the file as a template and inject the variables:
task_definition.tf
data "template_file" "task_definition" {
  template = file("${path.module}/container_definition.tpl")
  vars = {
    cloudwatch_group = var.cloudwatch_group
  }
}

resource "aws_ecs_task_definition" "task_definition" {
  family                   = var.ecs_task_definition_name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 4096
  container_definitions    = data.template_file.task_definition.rendered
  execution_role_arn       = aws_iam_role.task_execution_role.arn
  task_role_arn            = aws_iam_role.task_role.arn
}
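On Terraform 0.12+, the template_file data source can be replaced entirely by the built-in templatefile function, which is the approach HashiCorp now recommends. A sketch using the same .tpl file and variables (the aws_iam_role references are assumed from the question):

```hcl
# Sketch (Terraform 0.12+): templatefile() renders the .tpl directly,
# so no template_file data source is needed.
resource "aws_ecs_task_definition" "task_definition" {
  family                   = var.ecs_task_definition_name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 4096
  container_definitions = templatefile("${path.module}/container_definition.tpl", {
    cloudwatch_group = var.cloudwatch_group
  })
  execution_role_arn = aws_iam_role.task_execution_role.arn
  task_role_arn      = aws_iam_role.task_role.arn
}
```

This also avoids the extra plan-time indirection of the data source and keeps the variable mapping next to the resource that consumes it.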
