Terraform azurerm_windows_function_app ip_restrictions issues - azure

Using the azurerm_windows_function_app resource, I am trying to use the ip_restriction block in site_config; however, on plan/apply it errors because apparently optional values are required.
All I want to achieve is to deny all traffic unless it comes from a given network/subnet.
The documentation (https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/windows_function_app) states this is possible.
I am using the latest provider version and the latest Terraform version.
Terraform v1.3.7 on windows_amd64 + provider registry.terraform.io/hashicorp/azurerm v3.38.0 + provider registry.terraform.io/hashicorp/template v2.2.0
Code block uploaded to here https://codebeautify.org/cs/c433a9
Doing an apply on the above I get
╷
│ Error: Incorrect attribute value type
│
│   on main.tf line 181, in resource "azurerm_windows_function_app" "windows_function_app":
│  181:   ip_restriction = [ {
│  182:     action                    = "Deny"
│  183:     virtual_network_subnet_id = data.terraform_remote_state.netsec_outputs.outputs.vnet_subnets_info["APIM"].id
│  184:     name                      = "APIM Access"
│  185:     priority                  = 1
│  186:   } ]
│     ├────────────────
│     │ data.terraform_remote_state.netsec_outputs.outputs.vnet_subnets_info["APIM"].id is "/subscriptions/0000000--00000000--00000000/resourceGroups/prt-sit-2-netsec-01/providers/Microsoft.Network/virtualNetworks/prt-sit-2-vnet/subnets/apim"
│
│ Inappropriate value for attribute "ip_restriction": element 0: attributes "headers", "ip_address", and "service_tag" are required
╵
What I expect to happen is that the plan completes and proposes adding the ip_restriction block.

I worked around my issue: if you use a dynamic block containing only the attributes you actually need, it validates and passes.
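A minimal sketch of that workaround (the `local.ip_restrictions` list and its attribute names are assumptions, not taken from the original config) — the dynamic block emits only the attributes that are set, so the provider no longer demands `headers`, `ip_address`, and `service_tag`:

```hcl
# Sketch only: "local.ip_restrictions" is a hypothetical local holding
# a list of objects with just the fields we care about.
site_config {
  dynamic "ip_restriction" {
    for_each = local.ip_restrictions
    content {
      action                    = ip_restriction.value.action
      virtual_network_subnet_id = ip_restriction.value.virtual_network_subnet_id
      name                      = ip_restriction.value.name
      priority                  = ip_restriction.value.priority
    }
  }
}
```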

Related

Parse yaml file to run a for_each loop in terraform for multiple resources creation

I have the following config.yaml file
entities:
  admins:
    someone#somewhere.com
  viewers:
    anotherone#somewhere.com
And I want to create vault entities based on the users below the yaml nodes admins and viewers
locals {
  config  = yamldecode(file("config.yaml"))
  admins  = keys(local.config["entities"]["admins"]...)
  viewers = keys(local.config["entities"]["viewers"]...)
}

resource "vault_identity_entity" "admins" {
  for_each = toset(local.admins)
  name     = each.key
  policies = ["test"]
  metadata = {
    foo = "bar"
  }
}

resource "vault_identity_entity" "viewers" {
  for_each = toset(local.viewers)
  name     = each.key
  policies = ["test"]
  metadata = {
    foo = "bar"
  }
}
The code above fails with:
│ Error: Invalid expanding argument value
│
│ on ../../../../entities/main.tf line 3, in locals:
│ 3: admins = keys(local.config["entities"]["admins"]...)
│ ├────────────────
│ │ while calling keys(inputMap)
│ │ local.config["entities"]["admins"] is "someone#somewhere.com"
│
│ The expanding argument (indicated by ...) must be of a tuple, list, or set
│ type.
╵
╷
│ Error: Invalid index
│
│ on ../../../../entities/main.tf line 4, in locals:
│ 4: viewers = keys(local.config["entities"]["viewers"]...)
│ ├────────────────
│ │ local.config["entities"] is object with 2 attributes
│
│ The given key does not identify an element in this collection value.
How should I structure my yaml file?
It seems like you want the YAML to be a hash of a hash of a list of strings. You can restructure for that like:
entities:
  admins:
    - someone#somewhere.com
  viewers:
    - anotherone#somewhere.com
This decodes to the equivalent of map(map(list(string))) when the YAML-formatted string is passed through yamldecode into HCL2.
However, you are also attempting to convert the type with the ellipsis operator ... in your locals, and returning only the keys. I am unsure why you are doing that, and both should be removed:
admins = local.config["entities"]["admins"]
viewers = local.config["entities"]["viewers"]
Afterwards, you can convert to a set type with toset like you are doing already, and then leverage that within the for_each meta-argument as per usual:
for_each = toset(local.admins)
for_each = toset(local.viewers)
This will result in the desired behavior.
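Putting the answer together, a minimal sketch of the whole configuration against the restructured YAML (filename, resource names, and policy values carried over from the question):

```hcl
locals {
  config  = yamldecode(file("config.yaml"))
  admins  = local.config["entities"]["admins"]  # now a list(string)
  viewers = local.config["entities"]["viewers"]
}

resource "vault_identity_entity" "admins" {
  for_each = toset(local.admins)
  name     = each.key
  policies = ["test"]
}

resource "vault_identity_entity" "viewers" {
  for_each = toset(local.viewers)
  name     = each.key
  policies = ["test"]
}
```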

Terraform output being flagged as sensitive

We have created some terraform stacks for different domains, like network stack for vpc, rds stack for rds stuff, etc.
And, for instance, the rds stack depends on the network stack to get values from the outputs:
Output from network stack:
output "public_subnets" {
  value = aws_subnet.public.*.id
}

output "private_subnets" {
  value = aws_subnet.private.*.id
}

output "data_subnets" {
  value = aws_subnet.data.*.id
}

... and so on
And the rds stack taps into those outputs:
data "tfe_outputs" "networking" {
  organization = "my-tf-cloud-org"
  workspace    = "network-production-eucentral1"
}
But when I try to use the output:
╷
│ Error: Unsupported attribute
│
│ on main.tf line 20, in module "db":
│ 20: base_domain = data.tfe_outputs.dns.values.fqdn
│ ├────────────────
│ │ data.tfe_outputs.dns.values has a sensitive value
│
│ This object does not have an attribute named "fqdn".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 22, in module "db":
│ 22: subnets = data.tfe_outputs.networking.values.data_subnets
│ ├────────────────
│ │ data.tfe_outputs.networking.values has a sensitive value
│
│ This object does not have an attribute named "data_subnets".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 23, in module "db":
│ 23: vpc_id = data.tfe_outputs.networking.values.vpc_id
│ ├────────────────
│ │ data.tfe_outputs.networking.values has a sensitive value
│
│ This object does not have an attribute named "vpc_id".
This was working before; it started failing all of a sudden.
I tried adding the nonsensitive cast, but it does not work.
Any ideas?
Update:
I managed to fix the issue. I'm using Terraform Cloud with remote state. If you go to the general settings of your workspace_with_the_output in Terraform Cloud, you will find an option called "Remote state sharing".
I added my workspace_which_consume_state to that list and now it's working. Hopefully this helps.
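For reference, once state sharing is enabled the consuming side might look like the sketch below (the module block and its path are illustrative; note that `values` on `tfe_outputs` is always marked sensitive, so wrapping individual attributes in `nonsensitive()` may still be needed wherever a non-sensitive value is required):

```hcl
data "tfe_outputs" "networking" {
  organization = "my-tf-cloud-org"
  workspace    = "network-production-eucentral1"
}

module "db" {
  source  = "./modules/db"  # hypothetical module path
  vpc_id  = data.tfe_outputs.networking.values.vpc_id
  subnets = nonsensitive(data.tfe_outputs.networking.values.data_subnets)
}
```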

Terraform plan fails due to content argument missing in local_file resource

I have been testing out something using the terraform for_each loop method and ran into this error with the local_file resource.
$ cat main.tf
resource "local_file" "pet" {
  filename = each.value
  for_each = var.filename
}
$ cat variables.tf
variable "filename" {
  type    = set(string)
  default = [
    "/home/user/pets.txt",
    "/home/user/dogs.txt",
    "/home/user/cats.txt"
  ]
}
when I run terraform plan after init, I see the following errors:
$ terraform plan
╷
│ Error: Invalid combination of arguments
│
│ with local_file.pet,
│ on main.tf line 1, in resource "local_file" "pet":
│ 1: resource "local_file" "pet" {
│
│ "content_base64": one of `content,content_base64,sensitive_content,source` must be specified
╵
╷
│ Error: Invalid combination of arguments
│
│ with local_file.pet,
│ on main.tf line 1, in resource "local_file" "pet":
│ 1: resource "local_file" "pet" {
│
│ "source": one of `content,content_base64,sensitive_content,source` must be specified
╵
╷
│ Error: Invalid combination of arguments
│
│ with local_file.pet,
│ on main.tf line 1, in resource "local_file" "pet":
│ 1: resource "local_file" "pet" {
│
│ "content": one of `content,content_base64,sensitive_content,source` must be specified
╵
╷
│ Error: Invalid combination of arguments
│
│ with local_file.pet,
│ on main.tf line 1, in resource "local_file" "pet":
│ 1: resource "local_file" "pet" {
│
│ "sensitive_content": one of `content,content_base64,sensitive_content,source` must be specified
╵
From the documentation, I can see that the content argument is optional, so I am confused by the above error.
Having encountered the same issue while using the count and for_each meta-arguments, I resorted to creating a "content" variable with some dummy text, and the errors disappeared.
But why does HashiCorp say they (content, sensitive_content, etc.) are optional if the config won't run successfully without them?
I hope this helps!
As the error message says:
one of content,content_base64,sensitive_content,source must be specified
The documentation states that each of content, content_base64, sensitive_content, and source is optional, but also that each one conflicts with the other three, and none of them has a default value.
Consequently, you need to specify exactly one of these four arguments.
This also makes sense: you need to define the content of the file you want created.
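For example, supplying a `content` argument (any one of the four would do) makes the original configuration plan cleanly; the text here is just a placeholder:

```hcl
resource "local_file" "pet" {
  for_each = var.filename
  filename = each.value
  content  = "placeholder"  # exactly one of content/content_base64/sensitive_content/source
}
```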

Terraform Datadog: trace_service_definition does not accept a block for "query" or "formula" even though it is in the documentation

Am I doing something wrong?
widget {
  widget_layout {
    x      = 0
    y      = 47
    width  = 50
    height = 25
  }
  timeseries_definition {
    request {
      formula {
        formula_expression = "query1 * 100"
        alias              = "Total Session Capacity"
      }
      query {
        metric_query {
          data_source = "metrics"
          query       = "sum:.servers.available{$region,$stage,$service-name} by {availability-zone}"
          name        = "query1"
        }
      }
    }
  }
}
Documentation links:
https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/dashboard#nestedblock--widget--group_definition--widget--timeseries_definition--request--query
https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/dashboard#nested-schema-for-widgetgroup_definitionwidgettimeseries_definitionrequestformula
$terraform --version
Terraform v1.0.11
on darwin_amd64
+ provider registry.terraform.io/datadog/datadog v2.21.0
$terraform validate
╷
│ Error: Unsupported block type
│
│ on weekly_ops_dashboard.tf line 152, in resource "datadog_dashboard" "weekly_ops":
│ 152: formula {
│
│ Blocks of type "formula" are not expected here.
╵
╷
│ Error: Unsupported block type
│
│ on weekly_ops_dashboard.tf line 156, in resource "datadog_dashboard" "weekly_ops":
│ 156: query {
│
│ Blocks of type "query" are not expected here.
You seem to be using an old version of the Datadog Terraform provider:
provider registry.terraform.io/datadog/datadog v2.21.0
The documentation for version 2.21.0 doesn't mention formula.
Either upgrade to a newer version or use whatever is available in 2.21.0.
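One way to require a newer provider is a `required_providers` constraint along these lines (the `>= 3.0.0` floor is an assumption; check the provider changelog for the exact release that introduced the `formula` and `query` blocks):

```hcl
terraform {
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 3.0.0"  # assumed minimum; verify against the changelog
    }
  }
}
```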

Terraform: reading a YAML file and assigning local variables

I am trying to read a YAML file and assign its values to local variables, but the code below gives an `Invalid index` error. How can I fix this?
YAML file server.yaml
vm:
  - name: vmingd25
  - system_cores: 4
Code block
locals {
  vm_raw  = yamldecode(file("server.yaml"))["vm"]
  vm_name = local.vm_raw["name"]
  vm_cpu  = local.vm_raw["system_cores"]
}
Error message
╷
│ Error: Invalid index
│
│ on main.tf line 16, in locals:
│ 16: vm_name= local.vm_raw["name"]
│ ├────────────────
│ │ local.vm_raw is tuple with 10 elements
│
│ The given key does not identify an element in this collection value: a number is required.
╵
╷
│ Error: Invalid index
│
│ on main.tf line 17, in locals:
│ 17: vm_cpu = local.vm_raw["system_cores"]
│ ├────────────────
│ │ local.vm_raw is tuple with 10 elements
│
│ The given key does not identify an element in this collection value: a number is required.
Your YAML is equivalent to the following JSON:
{
  "vm": [
    {
      "name": "vmingd25"
    },
    {
      "system_cores": 4
    }
  ]
}
As you can see, the vm element is a list of objects because you are using the - character. This means you need to either:
Change your YAML to remove the - list definition, e.g.
vm:
  name: vmingd25
  system_cores: 4
This would turn the list into a dictionary so you could index with the keys as you have done in your question. OR
If you cannot change the YAML then you will need to index with an integer. This might work if your YAML never changes but is definitely not recommended.
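For completeness, a sketch of that fragile integer-index version against the unchanged YAML:

```hcl
locals {
  vm_raw  = yamldecode(file("server.yaml"))["vm"]
  vm_name = local.vm_raw[0]["name"]          # first list element holds "name"
  vm_cpu  = local.vm_raw[1]["system_cores"]  # second list element holds "system_cores"
}
```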
