I'm really new to Polars.
After running this code
fn read_csv() -> Result<(), PolarsError> {
    println!("Hello, polars! 🐯");
    let df = CsvReader::from_path("./test_data/tar-data/csv/unziped/body.csv")?
        .has_header(false)
        .finish()?;
    let df = df.lazy().select([
        col("column_15").str().split("|").alias("origin"),
        col("column_16").str().split("|").alias("destination"),
    ]);
    let mut df = df.collect()?;
    println!("Schema {:?}", df.schema());
    println!("{:?}", df);
    Ok(())
}
I have two columns with lists like below.
Each column is a List of utf8.
Schema Schema:
name: origin, data type: List(Utf8)
name: destination, data type: List(Utf8)
shape: (10, 2)
┌───────────────────────┬───────────────────────┐
│ origin ┆ destination │
│ --- ┆ --- │
│ list[str] ┆ list[str] │
╞═══════════════════════╪═══════════════════════╡
│ ["JOI", "GRU", "DFW"] ┆ ["VCP", "DFW", "SLC"] │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ["JOI", "GRU", "ATL"] ┆ ["GRU", "ATL", "SLC"] │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ["JOI", "GRU", "MEX"] ┆ ["GRU", "MEX", "SLC"] │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ["JOI", "GRU", "MCO"] ┆ ["GRU", "MCO", "SLC"] │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ... ┆ ... │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ["JOI", "GRU", "IAH"] ┆ ["VCP", "IAH", "SLC"] │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ["JOI", "GRU", "ORD"] ┆ ["VCP", "ORD", "SLC"] │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ["JOI", "GRU", "JFK"] ┆ ["GRU", "JFK", "SLC"] │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ ["JOI", "GRU", "EWR"] ┆ ["GRU", "EWR", "SLC"] │
└───────────────────────┴───────────────────────┘
Now I need to zip this data into a single column (or Series, in Polars terms) in the form of a list of structs.
My goal is to save the DataFrame as JSON with the following structure:
{
"MyData": [
[
{
"Origin": "JOI",
"Destination": "VCP"
},
{
"Origin": "GRU",
"Destination": "DFW"
},
{
"Origin": "DFW",
"Destination": "SLC"
}
],
[
{
"Origin": "JOI",
"Destination": "GRU"
},
{
"Origin": "GRU",
"Destination": "ATL"
},
{
"Origin": "ATL",
"Destination": "SLC"
}
],
......
]
}
Which is actually a kind of
name: MyData, data type: List(List(Struct([Field { name: "origin", dtype: Utf8 }, Field { name: "destination", dtype: Utf8 }])))
I was trying to approach this with apply, fold_exprs, etc., but with no luck.
So my question really is: how do I create a column holding a list of predefined structs from N columns that each hold a list of data?
Unfortunately, I don't think Polars is the best tool for this job. It makes handling list columns somewhat awkward, especially when you want to mix structs in. Polars's CsvReader is still useful for parsing, though, and from there standard Rust can handle the reshaping just fine.
data.csv:
one,two
JOI|GRU|DFW,VCP|DFW|SLC
JOI|GRU|ATL,GRU|ATL|SLC
main.rs:
use polars::prelude::*;
use serde::Serialize;

#[derive(Debug, Serialize)]
#[serde(rename_all = "PascalCase")]
struct Flight {
    origin: String,
    destination: String,
}

fn read_csv() -> Result<Vec<Vec<Flight>>, PolarsError> {
    let df = CsvReader::from_path("src/data.csv")?
        .has_header(true)
        .finish()?;
    let data = df
        .column("one")?
        .utf8()?
        .into_iter()
        .zip(df.column("two")?.utf8()?)
        .filter_map(|(s1, s2)| {
            let s1 = s1?;
            let s2 = s2?;
            Some(
                s1.split('|')
                    .zip(s2.split('|'))
                    .map(|(src, dst)| Flight {
                        origin: src.into(),
                        destination: dst.into(),
                    })
                    .collect::<Vec<_>>(),
            )
        })
        .collect::<Vec<_>>();
    Ok(data)
}

fn main() {
    let data = read_csv().expect("problem reading dataframe");
    let out = serde_json::to_string_pretty(&data).expect("problem serializing");
    println!("{}", out);
}
Result:
[
[
{
"Origin": "JOI",
"Destination": "VCP"
},
{
"Origin": "GRU",
"Destination": "DFW"
},
{
"Origin": "DFW",
"Destination": "SLC"
}
],
[
{
"Origin": "JOI",
"Destination": "GRU"
},
{
"Origin": "GRU",
"Destination": "ATL"
},
{
"Origin": "ATL",
"Destination": "SLC"
}
]
]
I created a task in Snowflake with Terraform. It is created as expected, and the new task shows up in both Snowflake and the .tfstate. But when I try to update the task (i.e. change the schedule) and apply the changes with terraform apply, Terraform tells me:
│ Error: error retrieving root task TASK_MO: failed to locate the root node of: []: sql: no rows in result set
│
│ with snowflake_task.load_from_s3["MO"],
│ on main.tf line 946, in resource "snowflake_task" "load_from_s3":
│ 946: resource "snowflake_task" "load_from_s3" {
I did this just after creation, so no manual changes were made in Snowflake.
My assumption is that it can't find the actual task in Snowflake.
My resource
resource "snowflake_task" "load_from_s3" {
  for_each      = snowflake_stage.all
  name          = "TASK_${each.key}"
  database      = snowflake_database.database.name
  schema        = snowflake_schema.load_schemas["SRC"].name
  comment       = "Task to copy the ${each.key} messages from S3"
  schedule      = "USING CRON 0 7 * * * UTC"
  sql_statement = "COPY into ${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${each.key} from (select ${local.stages[each.key].fields}convert_timezone('UTC', current_timestamp)::timestamp_ntz,metadata$filename,metadata$file_row_number from #${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${each.key} (file_format => '${snowflake_database.database.name}.${snowflake_schema.load_schemas["SRC"].name}.${snowflake_file_format.generic.name}')) on_error=skip_file"
  enabled       = local.stages[each.key].is_enabled

  lifecycle {
    ignore_changes = [after]
  }
}
The resource in .tfstate
{
  "index_key": "MO",
  "schema_version": 0,
  "attributes": {
    "after": "[]",
    "comment": "Task to copy the MO messages from S3",
    "database": "ICEBERG",
    "enabled": true,
    "error_integration": "",
    "id": "ICEBERG|SRC|TASK_MO",
    "name": "TASK_MO_FNB",
    "schedule": "USING CRON 0 8 * * * UTC",
    "schema": "SRC",
    "session_parameters": null,
    "sql_statement": "COPY into ICEBERG.SRC.MO from (select $1,convert_timezone('UTC', current_timestamp)::timestamp_ntz,metadata$filename,metadata$file_row_number from #ICEBERG.SRC.MO (file_format =\u003e 'ICEBERG.SRC.GENERIC')) on_error=skip_file",
    "user_task_managed_initial_warehouse_size": "",
    "user_task_timeout_ms": null,
    "warehouse": "",
    "when": ""
  },
  "sensitive_attributes": [],
  "private": "bnVsbA==",
  "dependencies": [
    "snowflake_database.database",
    "snowflake_file_format.generic",
    "snowflake_schema.load_schemas",
    "snowflake_stage.all"
  ]
},
The query being run on Snowflake that (I guess) should identify the existing task. This query indeed returns zero items, which matches the error message from Terraform:
SHOW TASKS LIKE '[]' IN SCHEMA "ICEBERG"."SRC"
Does anyone know what I can do to be able to update the task with Terraform?
Thanks, Chris
The issue is reported here: "Existing Task in plan & apply change & error #1071". Upgrading the provider to snowflake-labs/snowflake 0.37.0 should resolve the issue.
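If you pin the provider version explicitly, the upgrade might look like this (a sketch; adjust the source address to whatever your configuration already uses):

```hcl
terraform {
  required_providers {
    snowflake = {
      source  = "snowflake-labs/snowflake"
      version = ">= 0.37.0" # version reported to fix issue #1071
    }
  }
}
```

After changing the constraint, run `terraform init -upgrade` so the new provider version is actually downloaded.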
I am trying to create multiple CloudWatch alarms using input from a JSON file or JSON declared locally; either is fine.
The problem is that this is within a module, so I cannot use file.auto.tfvars.json, which is how I previously did this.
File structure:
.
├── cloudwatch.tf
├── cloudwatchalarms.json
└── locals.tf
What I've tried so far:
Use jsondecode to bring in the cloudwatchalarms.json and loop over it in the following way:
cloudwatch.tf
# creates alarms based on what exists within cloudwatchalarms.auto.tfvars.json
resource "aws_cloudwatch_metric_alarm" "alarms" {
  for_each            = { for alarm in local.all_alarms : alarm => alarm }
  alarm_name          = "${var.awsResourceName}-${each.value.Identifier}"
  namespace           = each.value.Namespace
  comparison_operator = each.value.ComparisonOperator
  evaluation_periods  = each.value.EvaluationPeriods
  statistic           = each.value.Statistic
  treat_missing_data  = each.value.TreatMissingData
  threshold           = each.value.Threshold
  period              = each.value.Period
  metric_name         = each.value.MetricName

  dimensions = {
    EcsServiceName = var.awsResourceName
    EcsClusterName = var.ecs_cluster_name
  }
}
locals.tf
# create locals to pull in cloudwatchalarms.json
# create locals to pull in cloudwatchalarms.json
locals {
  cloudwatchAlarms = jsondecode(file("${path.module}/cloudwatchalarms.json"))
  # loop over the cloudwatchalarms json structure
  all_alarms = [for alarms in local.cloudwatchAlarms.cloudwatchAlarms : alarms.Identifier]
}
cloudwatchalarms.json
{
  "cloudwatchAlarms": [
    {
      "Identifier": "ServiceMemoryUtilisation",
      "Namespace": "AWS/ECS",
      "MetricName": "MemoryUtilization",
      "Statistic": "Average",
      "Threshold": 90,
      "Period": 60,
      "EvaluationPeriods": 5,
      "ComparisonOperator": "GreaterThanThreshold",
      "TreatMissingData": "missing"
    },
    {
      "Identifier": "ServiceCPUUtilisation",
      "Namespace": "AWS/ECS",
      "MetricName": "CPUUtilization",
      "Statistic": "Average",
      "Threshold": 90,
      "Period": 60,
      "EvaluationPeriods": 5,
      "ComparisonOperator": "GreaterThanThreshold",
      "TreatMissingData": "missing"
    }
  ]
}
When using this method I get the following errors for every attribute:
Error: Unsupported attribute
on .terraform/modules/terraform-ecs-monitoring/cloudwatch.tf line 4, in resource "aws_cloudwatch_metric_alarm" "alarms":
4: alarm_name = "${var.awsResourceName}-${each.value.Identifier}"
|----------------
| each.value is "TaskStability"
This value does not have any attributes.
The second method I tried was declaring the JSON structure in locals.
cloudwatch.tf
# creates alarms based on what exists within cloudwatchalarms.auto.tfvars.json
resource "aws_cloudwatch_metric_alarm" "alarms" {
  for_each            = { for alarm in local.cloudwatchAlarms : alarm.Identifier => alarm }
  alarm_name          = "${var.awsResourceName}-${each.value.Identifier}"
  namespace           = each.value.Namespace
  comparison_operator = each.value.ComparisonOperator
  evaluation_periods  = each.value.EvaluationPeriods
  statistic           = each.value.Statistic
  treat_missing_data  = each.value.TreatMissingData
  threshold           = each.value.Threshold
  period              = each.value.Period
  metric_name         = each.value.MetricName

  dimensions = {
    EcsServiceName = var.awsResourceName
    EcsClusterName = var.ecs_cluster_name
  }
}
locals.tf
locals {
  cloudwatchAlarms = {
    "cloudwatchAlarms": [
      {
        "Identifier": "ServiceMemoryUtilisation",
        "Namespace": "AWS/ECS",
        "MetricName": "MemoryUtilization",
        "Statistic": "Average",
        "Threshold": 90,
        "Period": 60,
        "EvaluationPeriods": 5,
        "ComparisonOperator": "GreaterThanThreshold",
        "TreatMissingData": "missing"
      },
      {
        "Identifier": "ServiceCPUUtilisation",
        "Namespace": "AWS/ECS",
        "MetricName": "CPUUtilization",
        "Statistic": "Average",
        "Threshold": 90,
        "Period": 60,
        "EvaluationPeriods": 5,
        "ComparisonOperator": "GreaterThanThreshold",
        "TreatMissingData": "missing"
      }
    ]
  }
}
I then get this error when trying this method:
Error: Unsupported attribute
on .terraform/modules/terraform-ecs-monitoring/cloudwatch.tf line 3, in resource "aws_cloudwatch_metric_alarm" "alarms":
3: for_each = { for alarm in local.cloudwatchAlarms : alarm.Identifier => alarm }
This value does not have any attributes.
Any help or pointing in the right direction would be great. Thanks.
Your first attempt comes closer on constructing the for_each map, and your second attempt on restructuring the data. Combining the two (note that jsondecode returns an object with a top-level cloudwatchAlarms key, so the for expression must iterate over that attribute):

locals {
  cloudwatchAlarms = jsondecode(file("${path.module}/cloudwatchalarms.json"))
}

resource "aws_cloudwatch_metric_alarm" "alarms" {
  for_each = { for alarm in local.cloudwatchAlarms.cloudwatchAlarms : alarm.Identifier => alarm }
  ...
}

The for_each meta-argument iterates over the list of objects (implicitly converted to a set) and, for each object, the for expression retrieves the value of its Identifier key together with the entire object. The map constructor assigns the former as the key and the latter as the value, which gives you the desired behavior for the body of your resource block as it currently exists in the question.
Note also that:
alarm_name = "${var.awsResourceName}-${each.value.Identifier}"
can be simplified to:
alarm_name = "${var.awsResourceName}-${each.key}"
I sorted this with the following configuration:
locals.tf
# create locals to pull in cloudwatchalarms.json
locals {
  cloudwatch_alarms = jsondecode(file("${path.module}/cloudwatchalarms.json"))
  # loop over the cloudwatchalarms json structure
  all_alarms = { for alarms in local.cloudwatch_alarms.cloudwatchAlarms : alarms.Identifier => alarms }
}
cloudwatchalarms.json
{
  "cloudwatchAlarms": [
    {
      "Identifier": "ServiceMemoryUtilisation",
      "Namespace": "AWS/ECS",
      "MetricName": "MemoryUtilization",
      "Statistic": "Average",
      "Threshold": 90,
      "Period": 60,
      "EvaluationPeriods": 5,
      "ComparisonOperator": "GreaterThanThreshold",
      "TreatMissingData": "missing"
    },
    {
      "Identifier": "ServiceCPUUtilisation",
      "Namespace": "AWS/ECS",
      "MetricName": "CPUUtilization",
      "Statistic": "Average",
      "Threshold": 90,
      "Period": 60,
      "EvaluationPeriods": 5,
      "ComparisonOperator": "GreaterThanThreshold",
      "TreatMissingData": "missing"
    }
  ]
}
cloudwatch.tf
# creates alarms based on what exists within cloudwatchalarms.auto.tfvars.json
resource "aws_cloudwatch_metric_alarm" "alarms" {
  for_each            = { for alarm in local.all_alarms : alarm.Identifier => alarm }
  alarm_name          = "${var.awsResourceName}-${each.value.Identifier}"
  namespace           = each.value.Namespace
  comparison_operator = each.value.ComparisonOperator
  evaluation_periods  = each.value.EvaluationPeriods
  statistic           = each.value.Statistic
  treat_missing_data  = each.value.TreatMissingData
  threshold           = each.value.Threshold
  period              = each.value.Period
  metric_name         = each.value.MetricName

  dimensions = {
    EcsServiceName = var.awsResourceName
    EcsClusterName = var.ecs_cluster_name
  }
}
The fix was changing this line in locals.tf to the following:
all_alarms = { for alarms in local.cloudwatch_alarms.cloudwatchAlarms : alarms.Identifier => alarms }
So this is now a map.
Then in cloudwatch.tf changing the for_each to the following:
for_each = { for alarm in local.all_alarms : alarm.Identifier => alarm }
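Since all_alarms is already a map keyed by Identifier, the extra for expression inside the resource is arguably redundant; if I'm reading the locals correctly, for_each could consume the map directly (a sketch, not tested against this module):

```hcl
resource "aws_cloudwatch_metric_alarm" "alarms" {
  # all_alarms is already { Identifier => alarm }, so it can be passed as-is
  for_each = local.all_alarms

  # each.key is the Identifier, each.value the full alarm object
  alarm_name = "${var.awsResourceName}-${each.key}"
  # ... remaining arguments unchanged, using each.value.<Attribute>
}
```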
Error message
aws_ssm_patch_baseline.baseline: Modifying... [id=pb-041e9a4a9c8723c10]
╷
│ Error: error updating SSM Patch Baseline (pb-041e9a4a9c8723c10): ValidationException: Unknown Filter Key: MSRC_SEVERITY
│ status code: 400, request id: b55aa4df-b782-4e40-a3a7-5637ac903bdd
│
│ with aws_ssm_patch_baseline.baseline,
│ on ssm.tf line 1, in resource "aws_ssm_patch_baseline" "baseline":
│    1: resource "aws_ssm_patch_baseline" "baseline" {
Terraform file content
patch_filter {
  key    = "MSRC_SEVERITY"
  values = var.patch_severity
}
The operating system for the patch baseline is AMAZON_LINUX.
In the console, a similar choice is available in a drop-down field.
Different operating systems support different filter properties. Unfortunately, AWS doesn't seem to document exactly which operating systems support which properties, but the API docs list the valid property values, and you can run the following command to check whether a property is valid for a given operating system and what options are available:
$ aws ssm describe-patch-properties --operating-system AMAZON_LINUX --property MSRC_SEVERITY
An error occurred (ValidationException) when calling the DescribePatchProperties operation: Property MSRC_SEVERITY is not supported for operating system AMAZON_LINUX
$ aws ssm describe-patch-properties --operating-system AMAZON_LINUX --property SEVERITY
{
"Properties": [
{
"Name": "Critical"
},
{
"Name": "Important"
},
{
"Name": "Low"
},
{
"Name": "Medium"
}
]
}
For reference, MSRC is the Microsoft Security Response Center so it makes sense that this only applies to Windows systems.
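Based on the describe-patch-properties output above, the fix for AMAZON_LINUX would presumably be to switch the filter key to SEVERITY. A sketch, assuming var.patch_severity holds values from the list returned by the CLI:

```hcl
patch_filter {
  key    = "SEVERITY"
  # for AMAZON_LINUX the valid values are Critical, Important, Medium, Low
  values = var.patch_severity
}
```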
Wondering if anyone has tackled this. I need to generate a list of egress CIDR blocks that is currently available for listing over an API. Sample output is the following:
[
{
"description": "blahnet-public-acl",
"metadata": {
"broadcast": "192.168.1.191",
"cidr": "192.168.1.128/26",
"ip": "192.168.1.128",
"ip_range": {
"start": "192.168.1.128",
"end": "192.168.1.191"
},
"netmask": "255.255.255.192",
"network": "192.168.1.128",
"prefix": "26",
"size": "64"
}
},
{
"description": "blahnet-public-acl",
"metadata": {
"broadcast": "192.168.160.127",
"cidr": "192.168.160.0/25",
"ip": "192.168.160.0",
"ip_range": {
"start": "192.168.160.0",
"end": "192.168.160.127"
},
"netmask": "255.255.255.128",
"network": "192.168.160.0",
"prefix": "25",
"size": "128"
}
}
]
So, I need to convert it into an Azure Firewall rule:
###############################################################################
# Firewall Rules - Allow Access To TEST VMs
###############################################################################
resource "azurerm_firewall_network_rule_collection" "azure-firewall-azure-test-access" {
  for_each            = local.egress_ips
  name                = "azure-firewall-azure-test-rule"
  azure_firewall_name = azurerm_firewall.public_to_test.name
  resource_group_name = var.resource_group_name
  priority            = 105
  action              = "Allow"

  rule {
    name                  = "test-access"
    source_addresses      = local.egress_ips[each.key]
    destination_ports     = ["43043"]
    destination_addresses = ["172.16.0.*"]
    protocols             = ["TCP"]
  }
}
So, the bottom line is that the allowed IP addresses have to be a list of strings for the source_addresses parameter, such as:
["192.168.44.0/24", "192.168.7.0/27", "192.168.196.0/24", "192.168.229.0/24", "192.168.138.0/25"]
I configured a data_sources.tf file:
data "http" "allowed_networks_v1" {
url = "https://testapiserver.com/api/allowed/networks/v1"
}
...and in locals.tf I need to configure:
locals {
  allowed_networks_json = jsondecode(data.http.allowed_networks_v1.body)
  egress_ips            = ...
}
...and that's where I am stuck. How can I parse that data in locals.tf so I can reference it from within Terraform?
Thanks a metric ton!!
I'm assuming that the list of strings you are referring to is the set of metadata.cidr values. We can extract those with a for expression in a local, and also apply distinct in case we get duplicates.
Here is a sample code
data "http" "allowed_networks_v1" {
  url = "https://raw.githack.com/heldersepu/hs-scripts/master/json/networks.json"
}

locals {
  allowed_networks_json = jsondecode(data.http.allowed_networks_v1.body)
  distinct_cidrs = distinct(flatten([
    for key, value in local.allowed_networks_json : [
      value.metadata.cidr
    ]
  ]))
}

output "data" {
  value = local.distinct_cidrs
}
and here is the output of a plan on that:
terraform plan
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
Terraform will perform the following actions:
Plan: 0 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ data = [
+ "192.168.1.128/26",
+ "192.168.160.0/25",
]
Here is the code for your second sample:
data "http" "allowed_networks_v1" {
  url = "https://raw.githack.com/akamalov/testfile/master/networks.json"
}

locals {
  allowed_networks_json = jsondecode(data.http.allowed_networks_v1.body)
  distinct_cidrs = distinct(flatten([
    for key, value in local.allowed_networks_json.egress_nat_ranges : [
      value.metadata.cidr
    ]
  ]))
}

output "data" {
  value = local.distinct_cidrs
}
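To connect this back to the firewall rule in the question, the deduplicated list can be passed straight to source_addresses, which also removes the need for the for_each over local.egress_ips. A sketch reusing the names from the question and the locals above:

```hcl
resource "azurerm_firewall_network_rule_collection" "azure-firewall-azure-test-access" {
  name                = "azure-firewall-azure-test-rule"
  azure_firewall_name = azurerm_firewall.public_to_test.name
  resource_group_name = var.resource_group_name
  priority            = 105
  action              = "Allow"

  rule {
    name                  = "test-access"
    source_addresses      = local.distinct_cidrs # the list of CIDR strings
    destination_ports     = ["43043"]
    destination_addresses = ["172.16.0.*"]
    protocols             = ["TCP"]
  }
}
```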
In Terraform I am attempting to pass a variable (a list) to a module that we built. This variable needs to be used within an aws_ecs_task_definition resource, in the container_definitions.
Right now I am just starting with an empty default list defined as a variable:
variable "task_enviornment" {
  type    = "list"
  default = []
}
My ECS task definition looks like this:
resource "aws_ecs_task_definition" "ecs_task_definition" {
  family                   = "${var.ecs_family}"
  network_mode             = "awsvpc"
  task_role_arn            = "${aws_iam_role.iam_role.arn}"
  execution_role_arn       = "${data.aws_iam_role.iam_ecs_task_execution_role.arn}"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "${var.fargate_cpu}"
  memory                   = "${var.fargate_memory}"
  container_definitions    = <<DEFINITION
[
  {
    "cpu": ${var.fargate_cpu},
    "image": "${var.app_image}",
    "memory": ${var.fargate_memory},
    "name": "OURNAME",
    "networkMode": "awsvpc",
    "environment": "${jsonencode(var.task_enviornment)}",
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${aws_cloudwatch_log_group.fargate-logs.name}",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "demo"
      }
    },
    "portMappings": [
      {
        "containerPort": ${var.app_port},
        "hostPort": ${var.app_port}
      }
    ]
  }
]
DEFINITION
}
The part I am having a problem with is the "environment" part:
"environment": "${jsonencode(var.task_enviornment)}",
I have tried a few different ways to get this to work.
If I do "environment": "${jsonencode(var.task_enviornment)}", I get:
ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go struct field ContainerDefinition.Environment of type []*ecs.KeyValuePair
If I do "environment": "${var.task_enviornment}", or "environment": ["${var.task_enviornment}"], I get:
At column 1, line 1: output of an HIL expression must be a string, or a single list (argument 8 is TypeList) in:
and then it just outputs the contents of container_definitions.
I also tried adding default values and got similar error messages. However, I do need to be able to handle no values being sent in, i.e. an empty list.
variable "task_enviornment" {
  type = "list"
  default = [
    {
      "name"  = "BUCKET",
      "value" = "test"
    }
  ]
}
After a lot of investigation, and a fresh set of eyes looking at this, I figured out the solution. I am unsure why this fixes it, and I feel like it is likely a bug.
Two things were needed to fix this.
First, remove type = "list" from the variable definition:

variable "task_environment" {
  default = []
}

Second, remove the quotes when using the variable:

"environment": ${jsonencode(var.task_environment)},
The below solution should work.
In variable.tf (note there must be only one default per variable; the original snippet declared it twice and was missing the closing brace):

variable "app_environments_vars" {
  type        = list(map(string))
  description = "environment variables needed by the application"
  default = [
    {
      "name"  = "BUCKET",
      "value" = "test"
    },
    {
      "name"  = "BUCKET1",
      "value" = "test1"
    }
  ]
}
and in your task definition you can use ${jsonencode(var.app_environments_vars)}, similar to:
container_definitions = <<DEFINITION
[
  {
    "cpu": ${var.fargate_cpu},
    "image": "${var.app_image}",
    "memory": ${var.fargate_memory},
    "name": "OURNAME",
    "networkMode": "awsvpc",
    "environment": ${jsonencode(var.app_environments_vars)},
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "${aws_cloudwatch_log_group.fargate-logs.name}",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "demo"
      }
    },
    "portMappings": [
      {
        "containerPort": ${var.app_port},
        "hostPort": ${var.app_port}
      }
    ]
  }
]
DEFINITION
My guess is that you were trying to use the type "map" instead of a list. As shown above, removing the type specification will also work.
Example:
List_sample = [1,2,3]
Map_sample = { key_name = "value" }
Reference: Terraform - Type Constraints
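On modern Terraform (0.12+), the shape of ECS environment entries can also be typed precisely instead of being left untyped. A sketch, reusing the variable name from the answer above:

```hcl
variable "app_environments_vars" {
  # each entry must match the ECS KeyValuePair shape: { name, value }
  type = list(object({
    name  = string
    value = string
  }))
  default     = []
  description = "environment variables passed to the container definition"
}
```

With an object type constraint, Terraform rejects malformed entries at plan time instead of letting ECS fail on the rendered JSON.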