I am writing a module to set up some servers on Hetzner, and I want to enable the user either to
provide an already deployed SSH key using its fingerprint as a variable,
or to add a new SSH key by providing its path as a variable if no fingerprint has been provided.
my variables.tf looks like this:
variable "ssh_key" {
# create new key from local file
default = "~/.ssh/id_rsa.pub"
}
variable "ssh_key_existing_fingerprint" {
# if there's already a key on Hetzner, use it via it's fingerprint
type = string
default = null
}
my main.tf:
# Obtain ssh key data
data "hcloud_ssh_key" "existing" {
fingerprint = var.ssh_key_existing_fingerprint
}
resource "hcloud_ssh_key" "default" {
name = "servers default ssh key"
public_key = file("${var.ssh_key}")
}
resource "hcloud_server" "server" {
name = "${var.server_name}"
server_type = "${var.server_flavor}"
image = "${var.server_image}"
location = "${var.server_location}"
ssh_keys = [var.ssh_key_existing_fingerprint ? data.hcloud_ssh_key.existing.id : hcloud_ssh_key.default.id]
The idea was to read the SSH key data source only if a fingerprint has been provided, and then to attach either the key from the data source or the local key as a fallback.
However, it doesn't work like this:
The data source fails because an empty identifier is not allowed:
data.hcloud_ssh_key.existing: Reading...
╷
│ Error: please specify a id, a name, a fingerprint or a selector to lookup the sshkey
│
│ with data.hcloud_ssh_key.existing,
│ on main.tf line 11, in data "hcloud_ssh_key" "existing":
│ 11: data "hcloud_ssh_key" "existing" {
How would one accomplish such a behavior?
It can't be null: a null value effectively removes the fingerprint attribute, so you are literally executing the hcloud_ssh_key data source without any attributes, which explains the error you get:
# this is what you are effectively calling
data "hcloud_ssh_key" "existing" {
}
Either ensure that the variable always has a non-null value, or supply id or name as an alternative lookup when fingerprint is null.
Update: make the data source optional with count:
data "hcloud_ssh_key" "existing" {
count = var.ssh_key_existing_fingerprint == null ? 0 : 1
fingerprint = var.ssh_key_existing_fingerprint
}
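Because count turns the data source into a list, references to it need an index. A sketch of how the server resource could then select the key, assuming the rest of the configuration from the question stays unchanged:

resource "hcloud_server" "server" {
  name        = var.server_name
  server_type = var.server_flavor
  image       = var.server_image
  location    = var.server_location
  # with count on the data source, it must be referenced as existing[0]
  ssh_keys = [
    var.ssh_key_existing_fingerprint == null ? hcloud_ssh_key.default.id : data.hcloud_ssh_key.existing[0].id
  ]
}

You may also want to guard hcloud_ssh_key.default with the inverse count expression, so the local key is only uploaded when no fingerprint was provided.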
I have two folders with a few files in each folder
services/
  dns.tf
app/
  outputs.tf
In the dns.tf I have the following:
resource "cloudflare_record" "pgsql_master_record" {
count = var.pgsql_enabled ? 1 : 0
zone_id = data.cloudflare_zone.this.id
name = "${var.name}.pg.${var.jurisdiction}"
value = module.db[0].primary.ip_address.0.ip_address
type = "A"
ttl = 3600
}
resource "cloudflare_record" "redis_master_record" {
count = var.redis_enabled ? 1 : 0
zone_id = data.cloudflare_zone.this.id
name = "${var.name}.redis.${var.jurisdiction}"
value = module.redis[0].host
type = "A"
ttl = 3600
}
And in my app outputs.tf I'd like to add outputs for the above resources
output "psql_master_record" {
value = cloudflare_record.pgsql_master_record[*].hostname
}
output "redis_master_record" {
value = cloudflare_record.redis_master_record[*].hostname
}
But I keep getting this error:
A managed resource "cloudflare_record" "redis_master_record" has not been declared in the root module.
You can't do it.
Your dns.tf and outputs.tf should be in the same folder.
Or, for example, you can use a data block with remote state:
In Terraform, you can output values from a configuration using the output block. These outputs can then be referenced from the calling configuration when the configuration is used as a module, or from another configuration using the terraform_remote_state data source.
Here's an example of how you might use the output block to output the value of an EC2 instance's ID:
resource "aws_instance" "example" {
# ...
}
output "instance_id" {
value = aws_instance.example.id
}
Within the same configuration you can simply reference the resource attribute directly (aws_instance.example.id); when the configuration is used as a child module, the calling configuration references the output as module.<module_name>.instance_id.
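For example, if the configuration above were used as a child module, the parent could re-export the value like this (the module name app is hypothetical):

module "app" {
  source = "./app"
}

output "app_instance_id" {
  # a child module's outputs are read as module.<name>.<output name>
  value = module.app.instance_id
}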
To use the output value from another configuration, you'll first need to create a data source for the remote state using the terraform_remote_state data source. Here's an example of how you might do that:
data "terraform_remote_state" "example" {
backend = "s3"
config {
bucket = "my-tf-state-bucket"
key = "path/to/state/file"
region = "us-west-2"
}
}
Then, you can reference the output value from the remote configuration as data.terraform_remote_state.example.outputs.instance_id.
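Applied to the layout in the question, that means declaring the outputs in the services configuration and reading them from app via remote state; a sketch, with the backend details assumed:

# services/outputs.tf
output "redis_master_record" {
  value = cloudflare_record.redis_master_record[*].hostname
}

# app/remote_state.tf
data "terraform_remote_state" "services" {
  backend = "s3"

  config = {
    bucket = "my-tf-state-bucket"
    key    = "services/terraform.tfstate"
    region = "us-west-2"
  }
}

# app/outputs.tf
output "redis_master_record" {
  value = data.terraform_remote_state.services.outputs.redis_master_record
}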
As far as I know, you have to run Terraform per directory. In the same directory you can have multiple Terraform files and use variables from file A in file B. You are currently splitting it across two directories; that is only possible with a module approach, and it does not work out of the box.
This thread should clarify it.
I'm trying to use for_each with a Terraform module that creates Datadog synthetic tests. The object names in an S3 bucket are listed and passed as the set for the for_each. The module reads the content of each file, using the each.value passed in by the calling module as the key. I hardcoded the S3 object key value in the module during testing and it worked. When I call the module from main.tf, passing in the key name dynamically from the set, it fails with the error below.
│ Error: Error in function call
│
│ on modules\Synthetics\trial2.tf line 7, in locals:
│ 7: servicedef = jsondecode(data.aws_s3_object.tdjson.body)
│ ├────────────────
│ │ data.aws_s3_object.tdjson.body is ""
│
│ Call to function "jsondecode" failed: EOF.
main.tf
data "aws_s3_objects" "serviceList" {
bucket = "bucketname"
}
module "API_test" {
for_each = toset(data.aws_s3_objects.serviceList.keys)
source = "./modules/Synthetics"
S3key = each.value
}
module
data "aws_s3_object" "tdjson" {
bucket = "bucketname"
key = var.S3key
}
locals {
servicedef = jsondecode(data.aws_s3_object.tdjson.body)
Keys = [for k,v in local.servicedef.Endpoints: k]
}
Any clues as to what's wrong here?
Thanks
Check out the note on the aws_s3_object data source:
The content of an object (body field) is available only for objects which have a human-readable Content-Type (text/* and application/json). This is to prevent printing unsafe characters and potentially downloading large amount of data which would be thrown away in favour of metadata.
Since it's successfully getting the data source (not throwing an error), but the body is empty, this is very likely to be your issue. Make sure that your S3 object has the Content-Type metadata set to application/json. Here's a Stack Overflow question/answer on how to do that via the CLI; you can also do it via the AWS console, API, or Terraform (if you created the object via Terraform).
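If you manage the objects with Terraform as well, a minimal sketch of uploading one with the right content type (the resource name and file path are placeholders):

resource "aws_s3_object" "service_definition" {
  bucket       = "bucketname"
  key          = "service.json"
  source       = "${path.module}/service.json"
  content_type = "application/json" # makes the body readable through the aws_s3_object data source
}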
EDIT: I found the other issue. Check out the syntax for using for_each with toset:
resource "aws_iam_user" "the-accounts" {
for_each = toset( ["Todd", "James", "Alice", "Dottie"] )
name = each.key
}
The important bit is that the documented pattern uses each.key rather than each.value (for a set the two are equivalent, so either works here).
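Applied to the module call from the question, that looks like this (unchanged apart from the S3key reference):

module "API_test" {
  for_each = toset(data.aws_s3_objects.serviceList.keys)
  source   = "./modules/Synthetics"
  S3key    = each.key
}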
My problem statement is simple, but I am not able to find a solution anywhere on the internet.
I have a list of users as locals:
// users
locals {
  allUsers = {
    dev_user_1 = {
      Name   = "user1"
      Email  = "user1#abc.com"
      GitHub = "user1" # github username
      Team   = "Dev"
    }
    devops_user_2 = {
      Name   = "user2"
      Email  = "user2#abc.com"
      GitHub = "user2" # github username
      Team   = "DevOps"
    }
    product_user_3 = {
      Name  = "user3"
      Email = "user3#abc.com"
      Team  = "Product"
    }
  }
}
These local tags are used to grant access to internal tools such as GitHub, monitoring tools, etc.
Now, the two users who belong to the Dev and DevOps teams need access to the GitHub org, while the product user only needs access to some dashboards, not to GitHub; hence, that tag is missing.
How can I loop over the terraform resource github_membership so that it skips this product user (or simply anyone who does not have the tag key GitHub)?
I am trying the following code, but no luck:
// Send GitHub invite
resource "github_membership" "xyzTeam" {
for_each = local.allUsers
username = each.value.GitHub
role = "member"
}
Errors:
╷
│ Error: Unsupported attribute
│
│ on users.tf line 12, in resource "github_membership" "xyzTeam":
│ 12: username = each.value.GitHub
│ ├────────────────
│ │ each.value is object with 3 attributes
│
│ This object does not have an attribute named "GitHub".
What did I do to try to solve this issue?
Set the GitHub key for everyone, but with its value as null.
Error:
╷
│ Error: "username": required field is not set
│
│ with github_membership.xyzTeam["user3"],
│ on users.tf line 10, in resource "github_membership" "xyzTeam":
│ 10: resource "github_membership" "devops" {
│
╵
If I leave the value as an empty string instead, the error is:
Error: PATCH https://api.github.com/user/memberships/orgs/XYZ: 422 You can only update an organization membership's state to 'active'. []
I also tried filtering the map:
for k, v in local.allUsers : k => v if v != ""
Same error, because it still tries to create the membership with the empty value and ultimately fails.
I cannot think of anything else. If someone can help me derive, from these existing locals, a new local that keeps only the entries that have a GitHub value, that hack would be super helpful.
You had the right idea with your third attempt, but the conditional logic in the for expression is slightly off. You need to use the can function instead:
{ for user, attributes in local.allUsers : user => attributes if can(attributes.GitHub) }
If the nested map contains a GitHub key, then can(attributes.GitHub) returns true, and the constructed map will contain that key-value pair. With this approach you build a new map from the old one, dropping every entry whose nested map does not contain a GitHub key.
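Putting it together with the resource from the question, the filtered map becomes the for_each, and product_user_3 is skipped:

resource "github_membership" "xyzTeam" {
  for_each = { for user, attributes in local.allUsers : user => attributes if can(attributes.GitHub) }

  username = each.value.GitHub
  role     = "member"
}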
I have recently created a Cosmos DB database in Terraform and I am trying to store its database connection string as a secret in Key Vault, but when doing this I get the following error:
╷
│ Error: Incorrect attribute value type
│
│   on keyvault.tf line 282, in resource "azurerm_key_vault_secret" "Authentication_Server_Cosmos_DB_ConnectionString":
│  282:   value = azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings
│    ├────────────────
│    │ azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings has a sensitive value
│
│ Inappropriate value for attribute "value": string required.
╵
I have also tried to use the sensitive argument, but Key Vault does not accept that argument, and I can't find any documentation on how to do this; the Terraform website just lists connection_strings as an attribute you can reference.
My Terraform secret code is below. I won't put all my code in here, as Stack Overflow doesn't like the amount of code that I have.
So please presume that I am using the latest azurerm provider and that the rest of my code is correct; it's just the secret part that's not working.
resource "azurerm_key_vault_secret" "Authentication_Server_Cosmos_DB_ConnectionString" { //Auth Server Cosmos Connection String Secret
name = "AuthenticationServerCosmosDBConnectionString"
value = azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings
key_vault_id = azurerm_key_vault.nscsecrets.id
depends_on = [
azurerm_key_vault_access_policy.client,
azurerm_key_vault_access_policy.service_principal,
azurerm_cosmosdb_account.nsauthsrvcosmosdb,
]
}
connection_strings is a list of four connection strings, and its elements are sensitive (secure string) values. So you need to convert the element to a plain string and pick the index of the value you want to store in the key vault.
To store all four connection strings, you can use the following:
resource "azurerm_key_vault_secret" "example" {
count = length(azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings)
name = "AuthenticationServerCosmosDBConnectionString-${count.index}"
value = tostring("${azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings[count.index]}")
key_vault_id = azurerm_key_vault.example.id
}
If you want to store only one connection string, use the index you need in the code below (for example, index 0 for the first connection string, and likewise 1/2/3):
resource "azurerm_key_vault_secret" "example1" {
name = "AuthenticationServerCosmosDBConnectionString"
value = tostring("${azurerm_cosmosdb_account.nsauthsrvcosmosdb.connection_strings[0]}")
key_vault_id = azurerm_key_vault.example.id
}
I am using Key Protect on IBM Cloud. I want to import an existing root key into my Key Protect instance using Terraform. I am following the documentation for ibm_kms_key:
data "ibm_resource_instance" "kms_instance" {
name = "henrikKeyProtectUS"
service = "kms"
location = "us-south"
}
resource "ibm_kms_key" "key" {
instance_id = data.ibm_resource_instance.kms_instance.guid
key_name = "mytestkey"
standard_key = false
payload = "rtmETw5IrxFIkRjl7ZYIxMs5Dk/wWQLJ+eQU+HSrWUo="
}
While applying the changes, Terraform returns an error:
ibm_kms_key.key: Creating...
╷
│ Error: Error while creating Root key with payload: kp.Error: correlation_id='618f8712-b357-xxx-af12-155ad18fbc26', msg='Unauthorized: The user does not have access to the specified resource'
│
│ with ibm_kms_key.key,
│ on main.tf line 7, in resource "ibm_kms_key" "key":
│ 7: resource "ibm_kms_key" "key" {
Why? I am the account owner and Key Protect instance administrator. I should have all the privileges.
The error is actually described in the introduction to ibm_kms_key, but it is easy to overlook: the region configured for the provider currently has to match the region of the KMS instance.
After switching my provider from eu-de to us-south as well, I was able to import the key:
provider "ibm" {
ibmcloud_api_key = var.ibmcloud_api_key
region = "us-south"
ibmcloud_timeout = var.ibmcloud_timeout
}
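If other resources in the same configuration need a different region, a provider alias is one option, so that only the KMS resources are pinned to the instance's region; a sketch, with the alias name assumed:

provider "ibm" {
  alias            = "us_south"
  ibmcloud_api_key = var.ibmcloud_api_key
  region           = "us-south"
}

resource "ibm_kms_key" "key" {
  # use the aliased provider only for the KMS key
  provider     = ibm.us_south
  instance_id  = data.ibm_resource_instance.kms_instance.guid
  key_name     = "mytestkey"
  standard_key = false
  payload      = "rtmETw5IrxFIkRjl7ZYIxMs5Dk/wWQLJ+eQU+HSrWUo="
}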