Terraform AWS CloudWatch log group for ECS tasks/containers - terraform

I'm trying to create an AWS ECS task with Terraform that will put logs in a specific CloudWatch log group. The problem is that the container definition lives in a JSON file, and there is no way for me to map the CloudWatch group name from the .tf file into that .json file.
container_definition.json:
[
  {
    "name": "supreme-task",
    "image": "xxxx50690yyyy.dkr.ecr.eu-central-1.amazonaws.com/supreme-task",
    "essential": true,
    "portMappings": [
      {
        "containerPort": 5000,
        "hostPort": 5000
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "supreme-task-group", <- This needs to be taken from the variable.tf file.
        "awslogs-region": "eu-central-1",
        "awslogs-stream-prefix": "streaming"
      }
    }
  }
]
variable.tf:
variable "ecs_task_definition_name" {
description = "Task definition name."
type = string
default = "supreme-task-def"
}
variable "task_role" {
description = "Name of the task role."
type = string
default = "supreme-task-role"
}
variable "task_execution_role" {
description = "Name of the task execution role."
type = string
default = "supreme-task-exec-role"
}
variable "cloudwatch_group" {
description = "CloudWatch group name."
type = string
default = "supreme-task-group"
}
task definition:
resource "aws_ecs_task_definition" "task_definition" {
family = var.ecs_task_definition_name
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = 1024
memory = 4096
container_definitions = file("modules/ecs-supreme-task/task-definition.json")
execution_role_arn = aws_iam_role.task_execution_role.name
task_role_arn = aws_iam_role.task_role.name
}
Is there a way to do that? Or maybe this should be done differently?

Solved by following #ydaetskcorR's comment: made the container definition an inline parameter.
container_definitions = <<DEFINITION
[
  {
    "name": "${var.repository_name}",
    "image": "${var.repository_uri}",
    "essential": true,
    "portMappings": [
      {
        "containerPort": 5000,
        "hostPort": 5000
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${var.cloudwatch_group}",
        "awslogs-region": "eu-central-1",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }
]
DEFINITION
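As an aside, on Terraform 0.12+ you can avoid hand-writing JSON in a heredoc entirely by building the definition with jsonencode(). A minimal sketch using the same variables as above:
container_definitions = jsonencode([
  {
    name      = var.repository_name
    image     = var.repository_uri
    essential = true
    portMappings = [
      {
        containerPort = 5000
        hostPort      = 5000
      }
    ]
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        "awslogs-group"         = var.cloudwatch_group
        "awslogs-region"        = "eu-central-1"
        "awslogs-stream-prefix" = "ecs"
      }
    }
  }
])
Since the structure is native HCL, quoting mistakes are impossible and numbers stay JSON numbers.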

If you want to load the container definition as a template, to avoid inlining the content in the .tf files, you could:
1- Create the container definition as a template file with variables; note that the extension should be .tpl:
container_definition.tpl
[
  {
    "name": "supreme-task",
    "image": "xxxx50690yyyy.dkr.ecr.eu-central-1.amazonaws.com/supreme-task",
    "essential": true,
    "portMappings": [
      {
        "containerPort": 5000,
        "hostPort": 5000
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${cloudwatch_group}",
        "awslogs-region": "eu-central-1",
        "awslogs-stream-prefix": "streaming"
      }
    }
  }
]
2- Then load the file as a template and inject the variables:
task_definition.tf
data "template_file" "task_definition" {
  template = file("${path.module}/container_definition.tpl")
  vars = {
    cloudwatch_group = var.cloudwatch_group
  }
}
resource "aws_ecs_task_definition" "task_definition" {
  family                   = var.ecs_task_definition_name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 4096
  container_definitions    = data.template_file.task_definition.rendered
  execution_role_arn       = aws_iam_role.task_execution_role.arn # expects an ARN, not a name
  task_role_arn            = aws_iam_role.task_role.arn
}
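Note: on Terraform 0.12 and later, the template_file data source is superseded by the built-in templatefile() function, so the data block can be dropped. A minimal equivalent sketch:
container_definitions = templatefile("${path.module}/container_definition.tpl", {
  cloudwatch_group = var.cloudwatch_group
})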

Related

Checkov failing CKV2_AWS_4 "Ensure API Gateway stage have logging level defined as appropriate" even on Checkov example

We are using Terraform to describe AWS API Gateway objects and Checkov to check our plan output. Originally we found we could not get Checkov to pass, as it always failed on CKV2_AWS_4 "Ensure API Gateway stage have logging level defined as appropriate".
Since then we have tried using both the Checkov site example and the Terraform example in place of our production API Gateway, but these fail too. Link to the Checkov example:-
https://docs.bridgecrew.io/docs/ensure-api-gateway-stage-have-logging-level-defined-as-appropiate
The definition of the failing Checkov test is:-
metadata:
  id: "CKV2_AWS_4"
  name: "Ensure API Gateway stage have logging level defined as appropriate"
  category: "LOGGING"
definition:
  and:
    - resource_types:
        - aws_api_gateway_stage
      connected_resource_types:
        - aws_api_gateway_method_settings
      operator: exists
      cond_type: connection
    - or:
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.logging_level"
          operator: "equals"
          value: "ERROR"
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.logging_level"
          operator: "equals"
          value: "INFO"
        - cond_type: "attribute"
          resource_types:
            - "aws_api_gateway_method_settings"
          attribute: "settings.metrics_enabled"
          operator: "equals"
          value: true
    - cond_type: filter
      attribute: resource_type
      value:
        - aws_api_gateway_stage
      operator: within
Here is our TF, which is an expanded version of the Terraform API Gateway example:-
data "aws_caller_identity" "current" {}
locals {
# The target account number
account_id = data.aws_caller_identity.current.account_id
# Local variable this is likely to be one of the following: development, nonproduction, production, feature/{name}.
name_suffix = terraform.workspace
}
resource "aws_api_gateway_rest_api" "example" {
body = jsonencode({
openapi = "3.0.1"
info = {
title = "example"
version = "1.0"
}
paths = {
"/path1" = {
get = {
x-amazon-apigateway-integration = {
httpMethod = "GET"
payloadFormatVersion = "1.0"
type = "HTTP_PROXY"
uri = "https://ip-ranges.amazonaws.com/ip-ranges.json"
}
}
}
}
})
name = "example"
}
resource "aws_api_gateway_deployment" "example" {
rest_api_id = aws_api_gateway_rest_api.example.id
triggers = {
redeployment = sha1(jsonencode(aws_api_gateway_rest_api.example.body))
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_api_gateway_stage" "example" {
deployment_id = "${aws_api_gateway_deployment.example.id}"
rest_api_id = "${aws_api_gateway_rest_api.example.id}"
stage_name = "example"
cache_cluster_enabled = true
cache_cluster_size = 6.1
xray_tracing_enabled = true
access_log_settings {
destination_arn = aws_cloudwatch_log_group.transfer_apigw_log_group.arn
format = "$context.identity.sourceIp,$context.identity.caller,$context.identity.user,$context.requestTime,$context.httpMethod,$context.resourcePath,$context.protocol,$context.status,$context.responseLength,$context.requestId,$context.extendedRequestId"
}
}
resource "aws_api_gateway_method_settings" "all" {
rest_api_id = "${aws_api_gateway_rest_api.example.id}"
stage_name = "${aws_api_gateway_stage.example.stage_name}"
method_path = "*/*"
settings {
metrics_enabled = true
logging_level = "ERROR"
caching_enabled = true
}
}
resource "aws_api_gateway_method_settings" "path_specific" {
rest_api_id = aws_api_gateway_rest_api.example.id
stage_name = aws_api_gateway_stage.example.stage_name
method_path = "path1/GET"
settings {
metrics_enabled = true
logging_level = "INFO"
caching_enabled = true
}
}
resource "aws_cloudwatch_log_group" "transfer_apigw_log_group" {
name = "transfer_apigw_log_group-${var.region}-${local.name_suffix}"
retention_in_days = 30
kms_key_id = "alias/aws/apigateway"
}
When terraform plan runs, we get this result, which Checkov reads:-
{
"format_version": "1.1",
"terraform_version": "1.2.7",
"planned_values": {
"root_module": {
"child_modules": [
{
"resources": [
{
"address": "module.api_gateway_uk.aws_api_gateway_deployment.example",
"mode": "managed",
"type": "aws_api_gateway_deployment",
"name": "example",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"description": null,
"stage_description": null,
"stage_name": null,
"triggers": {
"redeployment": "145be397ea51cabb14595b0f0ace006017953f0a"
},
"variables": null
},
"sensitive_values": {
"triggers": {}
}
},
{
"address": "module.api_gateway_uk.aws_api_gateway_method_settings.all",
"mode": "managed",
"type": "aws_api_gateway_method_settings",
"name": "all",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"method_path": "*/*",
"settings": [
{
"caching_enabled": true,
"logging_level": "ERROR",
"metrics_enabled": true,
"throttling_burst_limit": -1,
"throttling_rate_limit": -1
}
],
"stage_name": "example"
},
"sensitive_values": {
"settings": [
{}
]
}
},
{
"address": "module.api_gateway_uk.aws_api_gateway_method_settings.path_specific",
"mode": "managed",
"type": "aws_api_gateway_method_settings",
"name": "path_specific",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"method_path": "path1/GET",
"settings": [
{
"caching_enabled": true,
"logging_level": "INFO",
"metrics_enabled": true,
"throttling_burst_limit": -1,
"throttling_rate_limit": -1
}
],
"stage_name": "example"
},
"sensitive_values": {
"settings": [
{}
]
}
},
{
"address": "module.api_gateway_uk.aws_api_gateway_rest_api.example",
"mode": "managed",
"type": "aws_api_gateway_rest_api",
"name": "example",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"body": "{\"info\":{\"title\":\"example\",\"version\":\"1.0\"},\"openapi\":\"3.0.1\",\"paths\":{\"/path1\":{\"get\":{\"x-amazon-apigateway-integration\":{\"httpMethod\":\"GET\",\"payloadFormatVersion\":\"1.0\",\"type\":\"HTTP_PROXY\",\"uri\":\"https://ip-ranges.amazonaws.com/ip-ranges.json\"}}}}}",
"minimum_compression_size": -1,
"name": "example",
"parameters": null,
"put_rest_api_mode": null,
"tags": null
},
"sensitive_values": {
"binary_media_types": [],
"endpoint_configuration": [],
"tags_all": {}
}
},
{
"address": "module.api_gateway_uk.aws_api_gateway_stage.example",
"mode": "managed",
"type": "aws_api_gateway_stage",
"name": "example",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"access_log_settings": [
{
"format": "$context.identity.sourceIp,$context.identity.caller,$context.identity.user,$context.requestTime,$context.httpMethod,$context.resourcePath,$context.protocol,$context.status,$context.responseLength,$context.requestId,$context.extendedRequestId"
}
],
"cache_cluster_enabled": true,
"cache_cluster_size": "6.1",
"canary_settings": [],
"client_certificate_id": null,
"description": null,
"documentation_version": null,
"stage_name": "example",
"tags": null,
"variables": null,
"xray_tracing_enabled": true
},
"sensitive_values": {
"access_log_settings": [
{}
],
"canary_settings": [],
"tags_all": {}
}
},
{
"address": "module.api_gateway_uk.aws_cloudwatch_log_group.transfer_apigw_log_group",
"mode": "managed",
"type": "aws_cloudwatch_log_group",
"name": "transfer_apigw_log_group",
"provider_name": "registry.terraform.io/hashicorp/aws",
"schema_version": 0,
"values": {
"kms_key_id": "alias/aws/apigateway",
"name": "transfer_apigw_log_group-uk-default",
"retention_in_days": 30,
"skip_destroy": false,
"tags": null
},
"sensitive_values": {
"tags_all": {}
}
}
],
"address": "module.api_gateway_uk"
}
<SNIP>
I'm wondering which condition is being broken in the Checkov test? Could it be the 'connection' between objects, like the API Gateway stage and the REST API? I am not clear on how the tf plan output shows connections between objects, but the tf plan passes without any issues.
Thanks in advance.
Jon

Iterate over multiple state files to get a list of strings in Terraform

I am a newbie and have an issue retrieving values from state files. At present, I want to retrieve the value of vpc_id from multiple state files and build a list of strings out of them, so that it can be passed to a resource.
Input
locals.tf:
locals {
  aws_regions          = toset(["eu-west-1", "eu-central-1", "us-east-2", "us-west-2", "ap-south-1", "ap-southeast-1"])
  terraform_state_file = "eu-west-1/terraform.tfstate"
  # terragrunt_state_file = "${each.value}/vpc-layout/terragrunt.tfstate"
}
state.tf
data "terraform_remote_state" "plt-network-state" {
backend = "s3"
for_each = toset(local.aws_regions)
config = {
bucket = "tfstate-316899010651"
key = each.value == "eu-west-1" ? local.terraform_state_file : "${each.value}/vpc-layout/terragrunt.tfstate"
region = "eu-west-1"
}
}
Now I want to iterate over the retrieved state files to get the value of vpc_id from each:
terraform state file:
{
  "version": 4,
  "terraform_version": "1.0.5",
  "serial": 1117,
  "lineage": "5a401d1e-ec22-5ae0-5170-aa5b484f89cb",
  "outputs": {
    "dev_vpc_id": {
      "value": "xxx",
      "type": "string"
    },
    "acc_vpc_id": {
      "value": "yyy",
      "type": "string"
    },
terragrunt state file:
{
  "version": 4,
  "terraform_version": "0.13.5",
  "serial": 13,
  "lineage": "6a1eb7fb-82c5-b70a-c8ec-8734102fafdd",
  "outputs": {
    "vpcs_all": {
      "value": [
        {
          "environment": "acceptance",
          "id": "xxx"
        },
        {
          "environment": "development",
          "id": "yyy"
        },
        {
          "environment": "production",
          "id": "zzz"
        }
      ]
    }
  }
}
I want the IDs for all environments as a list of strings, so that they can be passed to the resource below. Something like:
locals {
  all_vpc_ids = [
    data.terraform_remote_state.plt-network-state[each.key].outputs != "" ? data.terraform_remote_state.plt-network-state[each.key].outputs.development_vpc_id : [for v in data.terraform_remote_state.plt-network-state[each.key].outputs.vpcs_all[*] : format("%q", v.id) if v.environment == "development"],
  ]
}
This needs to be passed to:
resource "aws_route53_zone" "demo" {
comment = "xyz.com"
name = "xyz.com"
dynamic "vpc" {
for_each = local.all_vpc_ids
content {
vpc_id = vpc.value
}
}
tags = {}
}
Any advice or help is much appreciated !!!
Thanks in advance.
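One possible approach is to normalize the two output shapes and flatten across regions. A sketch, assuming each plain Terraform state exposes a dev_vpc_id output and each Terragrunt state exposes the vpcs_all list shown above (adjust the output names to match your states):
locals {
  all_vpc_ids = flatten([
    for s in data.terraform_remote_state.plt-network-state :
    contains(keys(s.outputs), "dev_vpc_id")
      ? [s.outputs.dev_vpc_id]                                                 # plain Terraform state
      : [for v in s.outputs.vpcs_all : v.id if v.environment == "development"] # terragrunt state
  ])
}
The resulting list of strings can then drive the dynamic "vpc" block above.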

How to use/reference Terraform output values in BigQuery Schema file

Using the Terraform BigQuery module to deploy a BQ schema. Trying to define policy tags, but not sure how to reference the newly created taxonomy and policy tag IDs inside my JSON schema. Below is a dummy extract of my schema.json with policy tags linked to fields.
Problem:
The schema below references the IDs of the taxonomy and policy tag as ${google_data_catalog_taxonomy.my_taxonomy.id}, but when I apply TF it does not replace the values and throws this exception:
Error 400: Invalid value for policyTags: projects/my_project/locations/europe-west2/taxonomies/${google_data_catalog_taxonomy.my_taxonomy.id}/policyTags/${google_data_catalog_policy_tag.PII.id} is not a valid value. Expected value should follow the format "projects/<projectId>/locations/<locationId>/taxonomies/<taxonomyId>/policyTags/<policyTagId>".
Table_1.json looks like the following:
{
  "fields": [
    {
      "mode": "NULLABLE",
      "name": "Email",
      "type": "STRING",
      "policyTags": {
        "names": [
          "projects/my_project/locations/europe-west2/taxonomies/${google_data_catalog_taxonomy.my_taxonomy.id}/policyTags/${google_data_catalog_policy_tag.PII.id}"
        ]
      }
    },
    {
      "mode": "NULLABLE",
      "name": "Mobile",
      "type": "STRING",
      "policyTags": {
        "names": [
          "projects/my_project/locations/europe-west2/taxonomies/${google_data_catalog_taxonomy.my_taxonomy.id}/policyTags/${google_data_catalog_policy_tag.PII.id}"
        ]
      }
    }
  ]
}
I am outputting the taxonomy and policy tag as follows. Can anyone please suggest how these can be referenced in the schema.json file?
outputs.tf
output "my_taxonomy" {
value = google_data_catalog_taxonomy.my_taxonomy.id
}
output "PII" {
value = google_data_catalog_policy_tag.PII.id
}
Edit:
I am using the TF BigQuery module, where my table schema lives in a separate file.
main.tf
module "bigquery" {
source = "terraform-google-modules/bigquery/google"
dataset_id = "my_Dataset"
dataset_name = "my_Dataset"
description = "my_Dataset"
project_id = "my_project_id"
location = "europe-west2"
default_table_expiration_ms = 3600000
tables = [
{
table_id = "table_!",
**schema = "table_1.json",**
time_partitioning = null,
range_partitioning = null,
expiration_time = null,
clustering = null,
labels = {
env = "dev"
}
}
},
]
}
You can import the JSON template using templatefile(path, vars).
Edit your JSON to include the vars using ${...} syntax:
{
  "fields": [
    {
      "mode": "NULLABLE",
      "name": "Email",
      "type": "STRING",
      "policyTags": {
        "names": [
          "projects/my_project/locations/europe-west2/taxonomies/${my_taxonomy}/policyTags/${PII}"
        ]
      }
    },
    {
      "mode": "NULLABLE",
      "name": "Mobile",
      "type": "STRING",
      "policyTags": {
        "names": [
          "projects/my_project/locations/europe-west2/taxonomies/${my_taxonomy}/policyTags/${PII}"
        ]
      }
    }
  ]
}
In your Terraform config, edit the schema to use the templatefile function:
module "bigquery" {
source = "terraform-google-modules/bigquery/google"
dataset_id = "my_Dataset"
dataset_name = "my_Dataset"
description = "my_Dataset"
project_id = "my_project_id"
location = "europe-west2"
default_table_expiration_ms = 3600000
tables = [
{
table_id = "table_!",
schema = templatefile(
"${path.module}/table_1.json",
{
my_taxonomy = "${google_data_catalog_taxonomy.my_taxonomy.id}",
PII = "${google_data_catalog_policy_tag.PII.id}"
}),
time_partitioning = null,
range_partitioning = null,
expiration_time = null,
clustering = null,
labels = {
env = "dev"
}
}
},
]
}
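One caveat worth verifying: depending on the provider version, google_data_catalog_taxonomy.my_taxonomy.id and google_data_catalog_policy_tag.PII.id may already be full resource names of the form projects/.../taxonomies/.../policyTags/.... If so, pass the policy tag ID straight through instead of embedding it in another hand-built path, with "names": ["${pii_policy_tag}"] in the JSON. A sketch (pii_policy_tag is an illustrative variable name):
schema = templatefile("${path.module}/table_1.json", {
  pii_policy_tag = google_data_catalog_policy_tag.PII.id # may already be the full policyTags path
}),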

Task role defined by Terraform not working correctly for ECS scheduled task

Our team has a bunch of cron jobs running as ECS scheduled tasks. Lately I've been adding a new job that requires the use of DynamoDB, so I added the permissions in our Terraform files, but I keep getting a permission failure:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException:
User: arn:aws:sts::87********23:assumed-role/tcoe-tableau/74a408106bf543ee95dbe4841d00b0f7 is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:us-east-1:87********23:table/tcoe-candyjar-metrics (Service: AmazonDynamoDBv2;
Status Code: 400; Error Code: AccessDeniedException; Request ID: H52U8GCS1JAB74OJ6VSSEFLCQNVV4KQNSO5AEMVJF66Q9ASUAAJG; Proxy: null)
My related Terraform is as follows.
First, here are the ECS cluster and task definition:
resource "aws_ecs_cluster" "ecs-cluster" {
name = "${var.stack_id}"
tags {
StackId = "${var.stack_id}"
}
lifecycle {
ignore_changes = [
"tags"
]
}
}
resource "aws_ecs_task_definition" "task-definition" {
family = "${var.stack_id}"
network_mode = "awsvpc"
requires_compatibilities = [
"FARGATE"
]
cpu = "${var.cpu}"
memory = "${var.task_memory}"
task_role_arn = "${aws_iam_role.task_role.arn}"
execution_role_arn = "${aws_iam_role.ecs_task_execution_role.arn}"
container_definitions = <<EOF
[
{
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${var.log_group}",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "${var.stack_id}"
}
},
"ulimits": [
{
"name": "nofile",
"softLimit": 4096,
"hardLimit": 8192
}
],
"image": "${var.ecr_account}.dkr.ecr.us-east-1.amazonaws.com/${var.ecr_namespace}/${var.stack_id}:latest",
"environment": [
{"name": "ENV", "value": "${var.environment}" }
],
"essential": true,
"privileged": false,
"name": "${var.stack_id}",
"memory": ${var.memory}
}
]
EOF
tags {
StackId = "${var.stack_id}"
}
}
Then here's the task role for the task definition:
resource "aws_iam_role" "task_role" {
name = "${var.stack_id}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
${data.aws_caller_identity.current.account_id == var.dev_account ? "\"AWS\": [\"arn:aws:iam::61********19:role/${var.dev_role_name}\"]," : ""}
"Service": ["ecs-tasks.amazonaws.com"]
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_instance_profile" "task_role_profile" {
name = "${var.stack_id}"
role = "${aws_iam_role.task_role.name}"
}
Finally here I'm adding the dynamodb-related policy to the task role:
resource "aws_iam_role_policy" "main" {
name = "${var.stack_id}-extra-policy"
role = "${aws_iam_role.task_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:List*",
"dynamodb:Get*",
"dynamodb:Describe*",
"dynamodb:DeleteItem",
"dynamodb:Put*",
"dynamodb:UpdateItem",
"dynamodb:BatchWriteItem"
],
"Resource": [
"arn:aws:dynamodb:us-east-1:87********23:table/tcoe-candyjar-metrics",
"arn:aws:dynamodb:us-east-1:87********23:table/tcoe-candyjar-metrics/index/*"
]
}
]
}
EOF
}
Am I doing something wrong here or missing anything?
I thought my failure was due to using role.id instead of role.name, and I wanted to figure out the difference between id and name, so I posted this question: aws iam role id vs role name in terraform, when to use which?. The answer/comment indicated that they are exactly the same, which prompted me to go back and carefully check my commit and build history. I realized that the reason role.id didn't work was a human error on my part: my new code worked not because I used role.name, but because I unknowingly fixed the other error at the same time.
To summarize, role.id and role.name are exactly the same.
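For illustration, since the id attribute of aws_iam_role is the role's name, these two lines are interchangeable:
role = aws_iam_role.task_role.id   # returns the role name
role = aws_iam_role.task_role.name # same value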

Terraform rejecting JSON template_file

The following ECS task definition is being rejected by Terraform during a plan. The JSON validates, and using inline container_definitions works fine.
I've Googled and read some commentary stating that TF has issues with JSON objects, mostly related to nesting. I can get around this by placing the JSON directly in the container_definitions of the task definition resource block, but I would prefer to keep it in a template file.
Error: Error running plan: 1 error(s) occurred:
* module.sonarqube.aws_ecs_task_definition.task: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go struct field ContainerDefinition.Memory of type int64
JSON Document referenced in template_file:
{
  "name": "sonarqube",
  "image": "sonarqube:7.5-community",
  "memory": "2048",
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "${log-group}",
      "awslogs-region": "${region}",
      "awslogs-stream-prefix": "ecs"
    }
  },
  "portMappings": {
    "hostPort": "9000",
    "protocol": "tcp",
    "containerPort": "9000"
  },
  "environment": [
    {
      "name": "sonar.jdbc.password",
      "value": "${password}"
    },
    {
      "name": "sonar.jdbc.url",
      "value": "${url}/${extra_url}"
    },
    {
      "name": "sonar.jdbc.username",
      "value": "${username}"
    }
  ]
}
Relevant TF Blocks:
data "template_file" "task-def" {
template = "${file("${path.module}/task-def.json")}"
vars = {
log-group = "/ecs/${var.cluster_name}-${var.name}"
region = "${var.region}"
url = "jdbc:postgresql://${var.rds_url}${var.extra_url}"
username = "${var.username}"
password = "${var.password}"
}
}
resource "aws_ecs_task_definition" "task" {
family = "${var.name}"
network_mode = "bridge"
cpu = "1024"
memory = "2048"
execution_role_arn = "${var.ecs-exec-role}"
container_definitions = "${data.template_file.task-def.rendered}"
}
Terraform expects the JSON in a slightly different format. After you fix the following, it will work:
Memory size and port numbers should be integers, not strings.
container_definitions must be a JSON array of container objects, not a bare JSON object (wrap the single object in [ ... ]), and portMappings must likewise be an array of objects.
The variable ${extra_url} was not passed in the vars of template_file.task-def.
Fixed version of task-def.json, tested on terraform v0.11.13 and provider.aws v2.9.0:
[
  {
    "name": "sonarqube",
    "image": "sonarqube:7.5-community",
    "memory": 2048,
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${log-group}",
        "awslogs-region": "${region}",
        "awslogs-stream-prefix": "ecs"
      }
    },
    "portMappings": [
      {
        "hostPort": 9000,
        "protocol": "tcp",
        "containerPort": 9000
      }
    ],
    "environment": [
      {
        "name": "sonar.jdbc.password",
        "value": "${password}"
      },
      {
        "name": "sonar.jdbc.url",
        "value": "${url}/${extra_url}"
      },
      {
        "name": "sonar.jdbc.username",
        "value": "${username}"
      }
    ]
  }
]
Fixed version of template_file.task-def:
data "template_file" "task-def" {
template = "${file("${path.module}/task-def.json")}"
vars = {
log-group = "/ecs/${var.cluster_name}-${var.name}"
region = "${var.region}"
url = "jdbc:postgresql://${var.rds_url}${var.extra_url}"
username = "${var.username}"
password = "${var.password}"
extra_url = "${var.extra_url}"
}
}
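As an aside, on Terraform 0.12+ you can sidestep this class of type error entirely by building the definition with jsonencode(), since HCL numbers are emitted as JSON numbers automatically. A minimal sketch of just the typed fields:
container_definitions = jsonencode([
  {
    name   = "sonarqube"
    image  = "sonarqube:7.5-community"
    memory = 2048 # an HCL number, so it encodes as a JSON integer
    portMappings = [
      { containerPort = 9000, hostPort = 9000, protocol = "tcp" }
    ]
  }
])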
