I have recently created a Cosmos DB database in Terraform and I am trying to pass its connection string as a secret in Key Vault, but when doing this I get the following error:
╷
│ Error: Incorrect attribute value type
│
│   on keyvault.tf line 282, in resource "azurerm_key_vault_secret" "Authentication_Server_Cosmos_DB_ConnectionString":
│  282:   value = azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings
│     ├────────────────
│     │ azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings has a sensitive value
│
│ Inappropriate value for attribute "value": string required.
╵
I have also tried to use the sensitive argument, but Key Vault does not accept that argument either, and I can't find any documentation on how to do this. The Terraform website just lists it as an attribute you can reference.
My Terraform secret code is below; I won't put all my code in here as Stack Overflow doesn't like the amount of code that I have.
So please presume I am using the latest azurerm provider and that all the rest of my code is correct; it's just the secret part that's not working.
resource "azurerm_key_vault_secret" "Authentication_Server_Cosmos_DB_ConnectionString" { //Auth Server Cosmos Connection String Secret
name = "AuthenticationServerCosmosDBConnectionString"
value = azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings
key_vault_id = azurerm_key_vault.nscsecrets.id
depends_on = [
azurerm_key_vault_access_policy.client,
azurerm_key_vault_access_policy.service_principal,
azurerm_cosmosdb_account.nsauthsrvcosmosdb,
]
}
There are 4 connection strings inside the value that you are referencing, and they are of type secure_string. So you need to convert them to a string value and apply an index for whichever one you want to store in the Key Vault.
For storing all 4 connection strings you can use the below:
resource "azurerm_key_vault_secret" "example" {
count = length(azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings)
name = "AuthenticationServerCosmosDBConnectionString-${count.index}"
value = tostring("${azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings[count.index]}")
key_vault_id = azurerm_key_vault.example.id
}
If you want to store only one connection string, then you can use an index as per your requirement (for example: if you want to store the first connection string, use 0 as the index, and likewise 1/2/3) in the below code:
resource "azurerm_key_vault_secret" "example1" {
name = "AuthenticationServerCosmosDBConnectionString"
value = tostring("${azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings[0]}")
key_vault_id = azurerm_key_vault.example.id
}
Related
I am trying to add admin users to a SQL DB in Terraform but am ending up with a syntax error:
There are some problems with the configuration, described below.
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
│ Error: Unsupported argument
│
│ on common_locals.tf line 171:
sql_db_login_name_map = {
  "123#gmail.com" = "object_id 123"
  "234#gmail.com" = "Object_Id 234"
  "456#gmail.com" = "object _id 345"
  "678#gmail.com" = "Object _id 567"
}
main.tf
resource "azurerm_sql_active_directory_administrator" "Sql_ad_admin" {
server_name = azurerm_sql_server.sql-db-server.name
resource_group_name = azurerm_resource_group.123_rg.name
for_each = local.sql_db_login_name_map
login = "${each.key}"
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = "${each.value}"
}
Could you please specify if I am missing something in mapping my values?
I am writing a module to set up some servers on Hetzner and I want to enable the user to either
provide an already deployed SSH key, using its fingerprint as a variable,
or add a new SSH key by providing its path as a variable if no fingerprint has been provided.
my variables.tf looks like this:
variable "ssh_key" {
# create new key from local file
default = "~/.ssh/id_rsa.pub"
}
variable "ssh_key_existing_fingerprint" {
# if there's already a key on Hetzner, use it via it's fingerprint
type = string
default = null
}
my main.tf:
# Obtain ssh key data
data "hcloud_ssh_key" "existing" {
fingerprint = var.ssh_key_existing_fingerprint
}
resource "hcloud_ssh_key" "default" {
name = "servers default ssh key"
public_key = file("${var.ssh_key}")
}
resource "hcloud_server" "server" {
name = "${var.server_name}"
server_type = "${var.server_flavor}"
image = "${var.server_image}"
location = "${var.server_location}"
ssh_keys = [var.ssh_key_existing_fingerprint ? data.hcloud_ssh_key.existing.id : hcloud_ssh_key.default.id]
The idea was to only obtain the data source SSH key if the fingerprint is not empty, and then add either the key from the data source or the local key as a fallback.
However, it doesn't work like this:
The data source fails because an empty identifier is not allowed:
data.hcloud_ssh_key.existing: Reading...
╷
│ Error: please specify a id, a name, a fingerprint or a selector to lookup the sshkey
│
│ with data.hcloud_ssh_key.existing,
│ on main.tf line 11, in data "hcloud_ssh_key" "existing":
│ 11: data "hcloud_ssh_key" "existing" {
How would one accomplish such a behavior?
"in this case it's null"
It can't be null. Setting it to null simply omits the fingerprint attribute, so you are effectively calling the hcloud_ssh_key data source with no attributes at all, which explains the error you get:
# this is what you are effectively calling
data "hcloud_ssh_key" "existing" {
}
Either ensure that you always have a non-null value, or provide an id or name as an alternative when the fingerprint is null.
Update
Make it optional:
data "hcloud_ssh_key" "existing" {
count = var.ssh_key_existing_fingerprint == null ? 0 : 1
fingerprint = var.ssh_key_existing_fingerprint
}
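With the data source behind a count, any reference to it needs an index. A minimal sketch of how the server's ssh_keys could then pick between the existing key and the newly created one (the hcloud_ssh_key.default resource is left exactly as in the question and will still be created either way):
resource "hcloud_server" "server" {
  name        = var.server_name
  server_type = var.server_flavor
  image       = var.server_image
  location    = var.server_location

  # use the existing key when a fingerprint was supplied, otherwise the locally created default key
  ssh_keys = [
    var.ssh_key_existing_fingerprint == null ? hcloud_ssh_key.default.id : data.hcloud_ssh_key.existing[0].id
  ]
}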
I'm trying to use for_each with a Terraform module that creates Datadog synthetic tests. The object names in an S3 bucket are listed and passed as the set for the for_each. The module reads the content of each file using the each.value passed in by the calling module as the key. I hardcoded the S3 object key value in the module during testing and it was working. When I attempt to call the module from main.tf, passing in the key name dynamically from the set, it fails with the below error.
│ Error: Error in function call
│
│ on modules\Synthetics\trial2.tf line 7, in locals:
│ 7: servicedef = jsondecode(data.aws_s3_object.tdjson.body)
│ ├────────────────
│ │ data.aws_s3_object.tdjson.body is ""
│
│ Call to function "jsondecode" failed: EOF.
main.tf
data "aws_s3_objects" "serviceList" {
bucket = "bucketname"
}
module "API_test" {
for_each = toset(data.aws_s3_objects.serviceList.keys)
source = "./modules/Synthetics"
S3key = each.value
}
module
data "aws_s3_object" "tdjson" {
bucket = "bucketname"
key = var.S3key
}
locals {
servicedef = jsondecode(data.aws_s3_object.tdjson.body)
Keys = [for k,v in local.servicedef.Endpoints: k]
}
Any clues as to what's wrong here?
Thanks
Check out the note on the aws_s3_object data source:
The content of an object (body field) is available only for objects which have a human-readable Content-Type (text/* and application/json). This is to prevent printing unsafe characters and potentially downloading large amount of data which would be thrown away in favour of metadata.
Since it's successfully getting the data source (not throwing an error), but the body is empty, this is very likely to be your issue. Make sure that your S3 object has the Content-Type metadata set to application/json. Here's a Stack Overflow question/answer on how to do that via the CLI; you can also do it via the AWS console, API, or Terraform (if you created the object via Terraform).
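If the object is created via Terraform, setting that metadata is a one-line change on the aws_s3_object resource. A minimal sketch (the key and source file here are placeholders, not names from the question):
resource "aws_s3_object" "service_definition" {
  bucket       = "bucketname"
  key          = "servicedef.json"                # placeholder key name
  source       = "${path.module}/servicedef.json" # placeholder local file
  content_type = "application/json"               # lets the data source expose the body
}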
EDIT: I found the other issue. Check out the syntax for using for_each with toset:
resource "aws_iam_user" "the-accounts" {
for_each = toset( ["Todd", "James", "Alice", "Dottie"] )
name = each.key
}
The important bit is that you should be using each.key instead of each.value.
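Applied to the module call from the question, that change would look roughly like this (same block, only the reference changed):
module "API_test" {
  for_each = toset(data.aws_s3_objects.serviceList.keys)
  source   = "./modules/Synthetics"
  S3key    = each.key
}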
I have a simple Terraform config to create a secret in Azure Key Vault.
provider "azurerm" {
features {}
}
data "azurerm_key_vault" "SomeApp-DEV" {
name = "SomeApp-DEV"
resource_group_name = "SomeApp"
}
resource "azurerm_key_vault_secret" "test-secret" {
name = "some-key"
value = "test value"
key_vault_id = data.azurerm_key_vault.SomeApp-DEV
}
After terraform plan I'm getting the following error:
Error: Incorrect attribute value type
on secret.tf line 13, in resource "azurerm_key_vault_secret" "test-secret":
13: key_vault_id = data.azurerm_key_vault.SomeApp-DEV
├────────────────
│ data.azurerm_key_vault.SomeApp-DEV is object with 17 attributes
Inappropriate value for attribute "key_vault_id": string required.
How can I make it work? I don't know what this "object with 17 attributes" message even means.
When you access the exported attributes with the namespace data.<type>.<name>, you are accessing the entire map of exported attributes from that data source (this is also true of exported attributes for resources). In this situation, you only want the string for the id, whose value is assigned to the key id in that map of exported attributes:
resource "azurerm_key_vault_secret" "test-secret" {
name = "some-key"
value = "test value"
key_vault_id = data.azurerm_key_vault.SomeApp-DEV.id
}
and this will fix your issue.
I am trying to use a Terraform string function and string concatenation on a Terraform tfvars variable, but when I run terraform plan it throws the below exception:
Error: A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
Following is the Terraform code:
locals {
  name_suffix = "${var.namespace != "" ? var.namespace : var.env}"
}

resource "azurerm_container_registry" "my_acr" {
  name                = "myacr${replace(name_suffix, "-", "")}"
  location            = "${azurerm_resource_group.location}"
  resource_group_name = "${azurerm_resource_group.name}"
  sku                 = "Basic"
  admin_enabled       = true
}
Here the namespace value will be resolved at runtime.
Terraform version 0.12.7
It was a silly mistake. Instead of name_suffix, I should have written it as local.name_suffix inside the ACR resource.
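For reference, the corrected reference would look roughly like this (a sketch; the resource group references use a hypothetical resource name "example", since the question's snippet omits it):
resource "azurerm_container_registry" "my_acr" {
  name                = "myacr${replace(local.name_suffix, "-", "")}"
  location            = azurerm_resource_group.example.location # hypothetical resource group name
  resource_group_name = azurerm_resource_group.example.name     # hypothetical resource group name
  sku                 = "Basic"
  admin_enabled       = true
}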
Had a similar issue when setting up Terraform configuration files for AWS Fargate.
Got the error below:
│ Error: Invalid reference
│
│ on ../ecs/main.tf line 72, in resource "aws_ecs_service" "aes":
│ 72: type = order_placement_type
│
│ A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
╵
╷
│ Error: Invalid reference
│
│ on ../ecs/main.tf line 73, in resource "aws_ecs_service" "aes":
│ 73: field = order_placement_field
│
│ A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
The issue was that I missed the var prefix for variables, so instead of this:
ordered_placement_strategy {
  type  = order_placement_type
  field = order_placement_field
}
I corrected it to this:
ordered_placement_strategy {
  type  = var.order_placement_type
  field = var.order_placement_field
}
That's all.
Another thing to check: make sure you have the index specifier in the correct position.
I had the following code and ran into this problem:
data "cloudflare_origin_ca_root_certificate" "current" {
count = var.domain == null ? 0 : 1
algorithm = tls_private_key.privateKey[0].algorithm
}
resource "aws_acm_certificate" "cert" {
count = var.domain == null ? 0 : 1
#...
certificate_chain = data.cloudflare_origin_ca_root_certificate[0].current.cert_pem
}
It turns out I made the mistake of putting the [0] before the current selector instead of after it, so I just had to change the certificate_chain line to the following:
certificate_chain = data.cloudflare_origin_ca_root_certificate.current[0].cert_pem