Can't HCP Vault Admin Access Token access child namespace? - terraform

Here's my HCP Vault Namespace structure:
admin/
- terraform-modules-global/
Here, I want to create a child namespace of the parent namespace using the Vault Admin Token.
admin/
- terraform-modules-global/
- global/
But I get the following error:
Error: error writing to Vault: Error making API request.
Namespace: terraform-modules-global/terraform-modules
URL: PUT https://HCP_VAULT_URL:8200/v1/sys/namespaces/global
Code: 403. Errors:
* 1 error occurred:
* permission denied
I am trying this through Terraform Cloud.
data "tfe_outputs" "hcp-vault" {
organization = "nftbank"
workspace = "hcp-vault-global"
}
provider "vault" {
address = data.tfe_outputs.hcp-vault.values.vault_public_endpoint
token = data.tfe_outputs.hcp-vault.values.vault_admin_token # This was created via the hcp_vault_cluster_admin_token resource.
}
locals {
terraform-modules = {
environments = [
"global",
]
}
}
resource "vault_namespace" "terraform-modules" {
path = "terraform-modules"
}
# Try option 1
resource "vault_namespace" "terraform-modules" {
for_each = toset(local.terraform-modules.environments)
path = "terraform-modules-${each.value}"
namespace = "terraform-modules"
}
# Try option 2
provider "vault" {
address = data.tfe_outputs.hcp-vault.values.vault_public_endpoint
token = data.tfe_outputs.hcp-vault.values.vault_admin_token # This was created via the hcp_vault_cluster_admin_token resource.
namespace = "terraform-modules"
alias = "terraform-modules"
}
resource "vault_namespace" "terraform-modules" {
for_each = toset(local.terraform-modules.environments)
path = "terraform-modules-${each.value}"
provider = vault.terraform-modules
}
Both options fail. The terraform-modules namespace is created normally, but the child namespaces are not created.

Related

How to share terraform resources between modules?

I realised that Terraform modules re-create their resources per module declaration, so a resource created inside a module can only be referenced from outside if it is exposed as an output. I'm looking for a way to reuse a module without it re-creating its resources.
Imagine a scenario where I have three Terraform modules.
One creates an IAM policy (AWS), the second creates an IAM role, the third creates a different IAM role, and both roles share the same IAM policy.
In code:
# policy
resource "aws_iam_policy" "secrets_manager_read_policy" {
name = "SecretsManagerRead"
description = "Read only access to secrets manager"
policy = {} # just to shorten demonstration
}
output "policy" {
value = aws_iam_policy.secrets_manager_read_policy
}
# test-role-1
resource "aws_iam_role" "test_role_1" {
name = "test-role-1"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
module "policy" {
source = "../test-policy"
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_1.name
policy_arn = module.policy.policy.arn
}
# test-role-2
resource "aws_iam_role" "test_role_2" {
name = "test-role-2"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
module "policy" {
source = "../test-policy"
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_2.name
policy_arn = module.policy.policy.arn
}
# create-roles
module "role-1" {
source = "../../../modules/resources/test-role-1"
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
}
In this scenario Terraform tries to create two policies, one per module, but I want both roles to use the same resource.
Is there a way to keep the code clean, so that not all resources live in the same file, where a resource is defined once and can be used by multiple modules? Or is it a tree-like structure where sibling modules cannot share the same child? Yes, I could define the policy first and pass the needed attributes down to the child modules where I create the roles, but what if I want a many-to-many relationship between them, so that multiple roles share the same multiple policies?
I can think of a few ways to do this:
Option 1: Move the use of the policy module up to the parent level, and have your parent (root) Terraform code look like this:
# create-policy
module "my-policy" {
source = "../../../modules/resources/policy"
}
# create-roles
module "role-1" {
source = "../../../modules/resources/test-role-1"
policy = module.my-policy.policy
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
policy = module.my-policy.policy
}
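Note that for Option 1 the role modules themselves also need to accept the policy as an input and use it in their attachment. A sketch of what test-role-1 could contain under this approach (the variable name is assumed, mirroring the root code above):
# Inside ../../../modules/resources/test-role-1 (sketch)
variable "policy" {
  description = "IAM policy object created by the shared policy module"
}

resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
  role       = aws_iam_role.test_role_1.name
  policy_arn = var.policy.arn
}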
Option 2: Output the policy from the role modules, and also make it an optional input variable of the modules:
variable "policy" {
default = null # Make the variable optional
}
module "policy" {
# Create the policy, only if one wasn't passed in
count = var.policy == null ? 1 : 0
source = "../test-policy"
}
locals {
# Create a variable with the value of either the passed-in policy,
# or the one we are creating
my-policy = var.policy == null ? module.policy[0].policy : var.policy
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_role" {
role = aws_iam_role.test_role_2.name
policy_arn = local.my-policy
}
output "policy" {
value = locals.my-policy
}
Then your root code could look like this:
module "role-1" {
source = "../../../modules/resources/test-role-1"
}
module "role-2" {
source = "../../../modules/resources/test-role-2"
policy = module.role-1.policy
}
The first module wouldn't get an input, so it would create a new policy. The second module would get an input, so it would use it instead of re-creating the policy.
I also highly recommend looking at the source code for some of the official AWS Terraform modules, like this one. Reading the source code for those really helped me understand how to create reusable Terraform modules.

Terraform + Databricks error ENDPOINT_NOT_FOUND: Unsupported path:

I am wondering if someone already encountered this error I am getting when trying to create OBO Tokens for Databricks Service Principals.
When setting up the databricks_permissions I get:
Error: ENDPOINT_NOT_FOUND: Unsupported path: /api/2.0/accounts/< my account >/scim/v2/Me for account: < my account >
My code is really no different from what you see in the documentation: https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/obo_token
variable "principals" {
type = list(
object({
name = string
active = bool
})
)
}
resource "databricks_service_principal" "sp" {
count = length(var.principals)
display_name = "${var.prefix}-${var.principals[count.index].name}"
active = var.principals[count.index].active
workspace_access = var.principals[count.index].active
databricks_sql_access = var.principals[count.index].active
allow_cluster_create = false
allow_instance_pool_create = false
}
resource "databricks_permissions" "token_usage" {
count = length(var.principals)
authorization = "tokens"
access_control {
service_principal_name = databricks_service_principal.sp[count.index].application_id
permission_level = "CAN_USE"
}
}
The Service Principals are created as expected, but then databricks_permissions throws the odd error.
Fixed.
The issue was that I was trying to provision databricks_permissions with the same Databricks provider I used to create the workspace.
After creating the workspace, configuring a new provider with that new workspace's token fixed the issue.
So, first one has to create the workspace with the account-level provider:
provider "databricks" {
alias = "mws"
host = "https://accounts.cloud.databricks.com"
username = < ... >
password = < ... >
account_id = < ... >
}
Then, configure a new provider using that workspace:
provider "databricks" {
alias = "workspace"
host = module.databricks-workspace.databricks_host
token = module.databricks-workspace.databricks_token
}
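For completeness, here is a minimal sketch of pointing the permissions resource at the workspace-scoped provider via Terraform's provider meta-argument (the alias matches the provider block above; everything else is the original resource):
# Sketch only: use the workspace-scoped provider via its alias instead of
# the account-level (mws) provider that created the workspace.
resource "databricks_permissions" "token_usage" {
  provider      = databricks.workspace
  count         = length(var.principals)
  authorization = "tokens"

  access_control {
    service_principal_name = databricks_service_principal.sp[count.index].application_id
    permission_level       = "CAN_USE"
  }
}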

How does one create a service account and set it as IAM user in CloudSQL with terraform

Deploying a Postgres database on Cloud SQL via Terraform, I want to add a service account as an IAM user.
The documentation examples only show individual users. Following that example with an email address, I get repeated errors about the name being too long or the email address being invalid / not matching the expected pattern.
resource "google_sql_database_instance" "master" {
project = var.project
deletion_protection = false
name = "demo"
database_version = "POSTGRES_14"
settings {
tier = "db-f1-micro"
database_flags {
name = "cloudsql.iam_authentication"
value = "on"
}
}
}
resource "google_sql_user" "iam_user" {
name = "codeangler#example.com"
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_USER"
}
resource "google_sql_user" "iam_sa_user" {
name = google_service_account.custom_cloudsql_sa.name
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
resource "google_project_iam_member" "iam_user_cloudsql_instance_user" {
project = var.project
role = "roles/cloudsql.instanceUser"
member = format("user:%s", google_sql_user.iam_user.name)
}
resource "google_service_account" "custom_cloudsql_sa" {
account_id = var.project
}
resource "google_service_account_iam_member" "impersonation_sa" {
service_account_id = google_service_account.custom_cloudsql_sa.name
role = "roles/iam.serviceAccountUser"
member = format("user:%s", google_sql_user.iam_user.name)
}
Error message:
Error: Error, failed to insert user yetanothercaseyproject-c268@yetanothercaseyproject.iam.gserviceaccount.com into instance demo: googleapi: Error 400: Invalid request: User name "yetanothercodeanglerproject-c268@yetanothercodeanglerproject.iam.gserviceaccount.com" to be created is too long (max 63).., invalid
│ with google_sql_user.iam_sa_user,
│ on main.tf line 60, in resource "google_sql_user" "iam_sa_user":
│ 60: resource "google_sql_user" "iam_sa_user" {
│
Changing the resource to use the email gives a new error:
resource "google_sql_user" "iam_sa_user" {
name = google_service_account.custom_cloudsql_sa.email
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
Error: Error, failed to insert user aixjyznd@yetanothercodeanglerproject.iam.gserviceaccount.com into instance demo: googleapi: Error 400: Invalid request: Database username for Cloud IAM service account should be created without ".gserviceaccount.com" suffix., invalid
Use the account_id attribute, not name or email:
resource "google_sql_user" "iam_sa_user" {
name = google_service_account.custom_cloudsql_sa.account_id
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
According to Add an IAM user or service account to the database and my experience, you should omit the .gserviceaccount.com suffix from the account email.
Sample code:
resource "google_sql_user" "iam_sa_user" {
name = replace(google_service_account.custom_cloudsql_sa.email, ".gserviceaccount.com", "") // "sa-test-account-01@prj-test-stg.iam"
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
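One detail worth noting, as an assumption based on the linked Google documentation rather than the original question: the service account itself also needs the Cloud SQL instance-user role before it can log in with IAM database authentication. A hedged sketch mirroring the existing google_project_iam_member resource (the resource name is made up for illustration):
# Sketch: grant the instance-user role to the service account so it can
# log in via IAM database authentication. The member uses the full
# service-account email, not the shortened database username.
resource "google_project_iam_member" "iam_sa_cloudsql_instance_user" {
  project = var.project
  role    = "roles/cloudsql.instanceUser"
  member  = "serviceAccount:${google_service_account.custom_cloudsql_sa.email}"
}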

Vault kv secrets and nomad jobs

I am creating a Nomad job that accesses Vault KV secrets. At the moment I have managed to create the policies and a role, but I can't make the job consume the secret.
This would be my nomad job:
job "http-echo" {
datacenters = ["ikerdc2"]
group "echo" {
count = 1
task "server" {
driver = "docker"
vault {
policies = ["access-tables"]
}
template {
data = <<EOT
{{ with secret "kv/me" }}
NAME ="{{ .Data.data.name }}"
{{ end }}
EOT
destination = "echo.env"
env = true
}
config {
image = "hashicorp/http-echo:latest"
args = [
"-listen", ":8080",
"-text", "Hello World!",
]
}
resources {
network {
mbits = 10
port "http" {
static = 8080
}
}
}
service {
name = "http-echo"
port = "http"
tags = [
"urlprefix-/http-echo",
]
}
}
}
}
I have created a vault server with the command vault server -dev
I have a KV secret named "me", and its contents are just:
{
"name" = "Hello Iker"
}
And the policies are like this:
# Allow creating tokens under "nomad-cluster" role. The role name should be
# updated if "nomad-cluster" is not used.
path "auth/token/create/nomad-cluster" {
capabilities = ["update"]
}
# Allow looking up "nomad-cluster" role. The role name should be updated if
# "nomad-cluster" is not used.
path "auth/token/roles/nomad-cluster" {
capabilities = ["read"]
}
# Allow looking up the token passed to Nomad to validate the token has the
# proper capabilities. This is provided by the "default" policy.
path "auth/token/lookup-self" {
capabilities = ["read"]
}
# Allow looking up incoming tokens to validate they have permissions to access
# the tokens they are requesting. This is only required if
# `allow_unauthenticated` is set to false.
path "auth/token/lookup" {
capabilities = ["update"]
}
# Allow revoking tokens that should no longer exist. This allows revoking
# tokens for dead tasks.
path "auth/token/revoke-accessor" {
capabilities = ["update"]
}
# Allow checking the capabilities of our own token. This is used to validate the
# token upon startup.
path "sys/capabilities-self" {
capabilities = ["update"]
}
# Allow our own token to be renewed.
path "auth/token/renew-self" {
capabilities = ["update"]
}
path "kv/*" {
capabilities = ["create", "update", "read"]
}
And the role is like this:
{
"allowed_policies": "access-tables",
"token_explicit_max_ttl": 0,
"name": "nomad-cluster",
"orphan": true,
"token_period": 259200,
"renewable": true
}
These are the errors I get when I run the job:
Missing: vault.read(kv/me)
Template failed: vault.read(kv/me): vault.read(kv/me): Error making API request.
URL: GET http://127.0.0.1:8200/v1/kv/me
Code: 403. Errors:
* 1 error occurred:
* permission denied
If someone could help me with that it would be great, thanks
In your post, I don't see the contents of the access-tables policy itself. That ACL policy must have the following rules:
path "kv/data/me" {
capabilities = ["read"]
}
The permissions required by the nomad-cluster policy from the documentation exist so that Nomad itself can create Vault tokens carrying the policies you list in the vault stanzas of your Nomad jobs. Adding KV read capability to that policy will not help. Instead, the policy referenced in your job's vault stanza, access-tables, needs that permission.
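Putting it together, the whole access-tables policy can be this small. This is a sketch assuming the KV engine is mounted at kv/ and is version 2, which is why the kv/me path in the template maps to kv/data/me in the API; it can be loaded with vault policy write access-tables access-tables.hcl:
# access-tables.hcl
# Grants read access to the KV v2 secret that the Nomad template renders.
path "kv/data/me" {
  capabilities = ["read"]
}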

terraform wants 'valid credentials' for cloudflare but any arguments i add in my main.tf respond 'unsupported argument'

I had Terraform and Cloudflare working locally. Now that I have tried to add a GitHub Action, I am unable to run terraform plan successfully. It passes locally and fails in GitHub Actions.
│ Error: credentials are not set correctly
│
│ with provider["registry.terraform.io/cloudflare/cloudflare"],
│ on main.tf line 29, in provider "cloudflare":
│ 29: provider "cloudflare" {
My main file before my changes looked like this:
provider "aws" {
region = var.aws_region
}
provider "cloudflare" {
api_token = var.cloudflare_api_token
}
resource "aws_s3_bucket" "site" {
bucket = var.site_domain
acl = "public-read"
website {
index_document = "index.html"
error_document = "index.html"
}
}
resource "aws_s3_bucket" "www" {
bucket = "www.${var.site_domain}"
acl = "private"
policy = ""
website {
redirect_all_requests_to = "https://${var.site_domain}"
}
}
resource "aws_s3_bucket_policy" "public_read" {
bucket = aws_s3_bucket.site.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "PublicReadGetObject"
Effect = "Allow"
Principal = "*"
Action = "s3:GetObject"
Resource = [
aws_s3_bucket.site.arn,
"${aws_s3_bucket.site.arn}/*",
]
},
]
})
}
data "cloudflare_zones" "domain" {
filter {
name = var.site_domain
}
}
resource "cloudflare_record" "site_cname" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = var.site_domain
value = aws_s3_bucket.site.website_endpoint
type = "CNAME"
ttl = 1
proxied = true
}
resource "cloudflare_record" "www" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = "www"
value = var.site_domain
type = "CNAME"
ttl = 1
proxied = true
}
My terraform.tfvars file looks like this:
aws_region = "us-east-1"
aws_access_key_id = <my awsaccesskeyid>
aws_secret_key = <my awssecretkey>
site_domain = <my domain name>
cloudflare_api_token=<mytoken>
My variables.tf looked like:
variable "aws_region" {
type = string
description = "The AWS region to put the bucket into"
default = "us-east-1"
}
variable "site_domain" {
type = string
description = "The domain name to use for the static site"
default = "<my website name>.net"
}
variable "cloudflare_api_token" {
type = string
description = "The cloudflare Api key"
default = null
}
Locally, I set CLOUDFLARE_API_TOKEN=<my token> as an environment variable.
Everything worked until I tried following HashiCorp's tutorial here. When my first GitHub Action ran, terraform plan failed with this error:
Error: credentials are not set correctly
with provider["registry.terraform.io/cloudflare/cloudflare"],
on main.tf line 29, in provider "cloudflare":
To get past the cloudflare error, I have tried:
1. adding my Cloudflare API token to terraform.tfvars
2. setting my email and token in the cloudflare provider block in main.tf a variety of ways, including referencing the terraform.tfvars value
3. adding my Cloudflare token to variables.tf
4. adding my Cloudflare token to the environment variables in Terraform Cloud
5. adding my Cloudflare token and AWS keys to GitHub as secrets
Any time I try to pass anything new in the provider {} blocks, I get 'unsupported argument' errors.
Found the fix: I added an extra block to my GitHub workflow file (workflows/terraform.yml):
ENV_NAME: prod
AWS_ACCESS_KEY_ID: ${{ secrets.AWSACCESSKEY }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWSSECRETACCESSKEY }}
CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
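For what it's worth, once the token comes in through the CLOUDFLARE_API_TOKEN environment variable, the provider block itself can stay empty; a sketch (the explicit api_token argument used earlier also remains valid):
provider "cloudflare" {
  # No arguments: the provider reads CLOUDFLARE_API_TOKEN from the
  # environment, which the workflow env block above supplies.
}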
