I am wondering if someone has already encountered this error I am getting when trying to create OBO tokens for Databricks service principals.
When setting up the databricks_permissions I get:
Error: ENDPOINT_NOT_FOUND: Unsupported path: /api/2.0/accounts/< my account >/scim/v2/Me for account: < my account >
My code is really no different from what you see in the documentation: https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/obo_token
variable "principals" {
type = list(
object({
name = string
active = bool
})
)
}
resource "databricks_service_principal" "sp" {
count = length(var.principals)
display_name = "${var.prefix}-${var.principals[count.index].name}"
active = var.principals[count.index].active
workspace_access = var.principals[count.index].active
databricks_sql_access = var.principals[count.index].active
allow_cluster_create = false
allow_instance_pool_create = false
}
resource "databricks_permissions" "token_usage" {
count = length(var.principals)
authorization = "tokens"
access_control {
service_principal_name = databricks_service_principal.sp[count.index].application_id
permission_level = "CAN_USE"
}
}
The service principals are created as expected, but then databricks_permissions throws the error above.
Fixed.
The issue was that I was trying to provision databricks_permissions with the same Databricks provider I used to create the workspace.
After creating the workspace, configuring a second provider with that new workspace's token fixed the issue.
So, first one has to create the workspace with the account-level provider:
provider "databricks" {
alias = "mws"
host = "https://accounts.cloud.databricks.com"
username = < ... >
password = < ... >
account_id = < ... >
}
Then, configure a new provider using that workspace:
provider "databricks" {
alias = "workspace"
host = module.databricks-workspace.databricks_host
token = module.databricks-workspace.databricks_token
}
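The workspace-level resources then have to reference that aliased provider explicitly. A minimal sketch, reusing the resource from the question with only the provider argument added (depending on where the principals should live, databricks_service_principal may need the same alias):

resource "databricks_permissions" "token_usage" {
  # Run against the workspace API, not the account-level API
  provider      = databricks.workspace
  count         = length(var.principals)
  authorization = "tokens"

  access_control {
    service_principal_name = databricks_service_principal.sp[count.index].application_id
    permission_level       = "CAN_USE"
  }
}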
I'm trying to create a secret in GCP's Secret Manager.
The secret value comes from Vault (HCP Cloud).
How can I pass the value of the secret if I'm using a .tfvars file for the values?
Creating the secret without .tfvars works. Suggestions other than a data source are welcome as well; I saw that referencing locals isn't possible inside tfvars either.
vault.tf:
provider "vault" {
address = "https://testing-vault-public-vault-numbers.numbers.z1.hashicorp.cloud:8200"
token = "someToken"
}
data "vault_generic_secret" "secrets" {
path = "secrets/terraform/cloudcomposer/kafka/"
}
main.tf:
resource "google_secret_manager_secret" "connections" {
provider = google-beta
count = length(var.connections)
secret_id = "${var.secret_manager_prefix}-${var.connections[count.index].name}"
replication {
automatic = true
}
}
resource "google_secret_manager_secret_version" "connections-version" {
count = length(var.connections)
secret = google_secret_manager_secret.connections[count.index].id
secret_data = var.connections[count.index].uri
}
dev.tfvars:
image_version         = "composer-2-airflow-2.1.4"
env_size              = "LARGE"
env_name              = "development"
region                = "us-central1"
network               = "development-main"
subnetwork            = "development-subnet1"
secret_manager_prefix = "test"
connections = [
  { name = "postgres", uri = "postgresql://postgres_user:XXXXXXXXXXXX@1.1.1.1:5432/" }, ## This one works
  { name = "kafka", uri = "${data.vault_generic_secret.secrets.data["kafka_dev_password"]}" }
]
Getting:
Error: Invalid expression
on ./tfvars/dev.tfvars line 39:
Expected the start of an expression, but found an invalid expression token.
Thanks in advance.
Values in tfvars files have to be static, i.e., they cannot use any kind of dynamic assignment such as referencing data sources. However, in that case, using local variables [1] should be a viable solution:
locals {
  connections = [
    {
      name = "kafka",
      uri  = data.vault_generic_secret.secrets.data["kafka_dev_password"]
    }
  ]
}
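If you also want to keep the static entries from dev.tfvars (the postgres one in the example), a possible variant is to merge them with concat, leaving only the static entries in var.connections:

locals {
  # Static entries stay in dev.tfvars; dynamic, Vault-sourced ones are appended here
  connections = concat(var.connections, [
    {
      name = "kafka",
      uri  = data.vault_generic_secret.secrets.data["kafka_dev_password"]
    }
  ])
}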
Then, in the resource you need to use it in:
resource "google_secret_manager_secret" "connections" {
provider = google-beta
count = length(local.connections)
secret_id = "${var.secret_manager_prefix}-${local.connections[count.index].name}"
replication {
automatic = true
}
}
resource "google_secret_manager_secret_version" "connections-version" {
count = length(local.connections)
secret = google_secret_manager_secret.connections[count.index].id
secret_data = local.connections[count.index].uri
}
[1] https://developer.hashicorp.com/terraform/language/values/locals
I'm trying to create a Dataproc cluster in GCP using the Terraform resource google_dataproc_cluster, and I would like to create a Component Gateway along with it. The documentation states that the snippet below should be used for creation:
cluster_config {
  endpoint_config {
    enable_http_port_access = "true"
  }
}
Upon running terraform plan, I see the error "Error: Unsupported block type". I also tried using override_properties; in the GCP Dataproc console I can see that the property is enabled, but the Component Gateway is still disabled. I want to understand: is there an issue with using the block given in the Terraform documentation, and is there an alternative I can use?
software_config {
  image_version = "${var.image_version}"

  override_properties = {
    "dataproc:dataproc.allow.zero.workers"       = "true"
    "dataproc:dataproc.enable_component_gateway" = "true"
  }
}
Below is the error while running terraform apply:
Error: Unsupported block type
on main.tf line 35, in resource "google_dataproc_cluster" "dataproc_cluster":
35: endpoint_config {
Blocks of type "endpoint_config" are not expected here.
RESOURCE BLOCK:
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "${var.cluster_name}"
region = "${var.region}"
graceful_decommission_timeout = "120s"
labels = "${var.labels}"
cluster_config {
staging_bucket = "${var.staging_bucket}"
/*endpoint_config {
enable_http_port_access = "true"
}*/
software_config {
image_version = "${var.image_version}"
override_properties = {
"dataproc:dataproc.allow.zero.workers" = "true"
"dataproc:dataproc.enable_component_gateway" = "true" /* Has Been Added as part of Component Gateway Enabled which is already enabled in the endpoint_config*/
}
}
gce_cluster_config {
// network = "${var.network}"
subnetwork = "${var.subnetwork}"
zone = "${var.zone}"
//internal_ip_only = true
tags = "${var.network_tags}"
service_account_scopes = [
"cloud-platform"
]
}
master_config {
num_instances = "${var.master_num_instances}"
machine_type = "${var.master_machine_type}"
disk_config {
boot_disk_type = "${var.master_boot_disk_type}"
boot_disk_size_gb = "${var.master_boot_disk_size_gb}"
num_local_ssds = "${var.master_num_local_ssds}"
}
}
}
depends_on = [google_storage_bucket.dataproc_cluster_storage_bucket]
timeouts {
create = "30m"
delete = "30m"
}
}
Below is the snippet that worked for me to enable the Component Gateway in GCP:
provider "google-beta" {
project = "project_id"
}
resource "google_dataproc_cluster" "dataproc_cluster" {
name = "clustername"
provider = google-beta
region = us-east1
graceful_decommission_timeout = "120s"
cluster_config {
endpoint_config {
enable_http_port_access = "true"
}
}
This issue is discussed in this Git thread.
You can enable the Component Gateway in Cloud Dataproc by using the google-beta provider both on the Dataproc cluster resource and in the root provider configuration of Terraform.
Sample configuration:
# Terraform configuration goes here
provider "google-beta" {
  project = "my-project"
}

resource "google_dataproc_cluster" "mycluster" {
  provider                      = "google-beta"
  name                          = "mycluster"
  region                        = "us-central1"
  graceful_decommission_timeout = "120s"
  labels = {
    foo = "bar"
  }
  ...
  ...
}
Deploying a Postgres DB on Cloud SQL via Terraform, I want to have a service account as a user.
The documentation examples only show individual users. Following that example using an email address, I get repeated error messages about the name being too long or the email address being invalid or having the wrong pattern.
resource "google_sql_database_instance" "master" {
project = var.project
deletion_protection = false
name = "demo"
database_version = "POSTGRES_14"
settings {
tier = "db-f1-micro"
database_flags {
name = "cloudsql.iam_authentication"
value = "on"
}
}
}
resource "google_sql_user" "iam_user" {
name = "codeangler#example.com"
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_USER"
}
resource "google_sql_user" "iam_sa_user" {
name = google_service_account.custom_cloudsql_sa.name
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
resource "google_project_iam_member" "iam_user_cloudsql_instance_user" {
project = var.project
role = "roles/cloudsql.instanceUser"
member = format("user:%s", google_sql_user.iam_user.name)
}
resource "google_service_account" "custom_cloudsql_sa" {
account_id = var.project
}
resource "google_service_account_iam_member" "impersonation_sa" {
service_account_id = google_service_account.custom_cloudsql_sa.name
role = "roles/iam.serviceAccountUser"
member = format("user:%s", google_sql_user.iam_user.name)
}
Error message:
Error: Error, failed to insert user yetanothercaseyproject-c268@yetanothercaseyproject.iam.gserviceaccount.com into instance demo: googleapi: Error 400: Invalid request: User name "yetanothercodeanglerproject-c268@yetanothercodeanglerproject.iam.gserviceaccount.com" to be created is too long (max 63).., invalid
│ with google_sql_user.iam_sa_user,
│ on main.tf line 60, in resource "google_sql_user" "iam_sa_user":
│ 60: resource "google_sql_user" "iam_sa_user" {
│
Changing the resource to use email gives a new error:
resource "google_sql_user" "iam_sa_user" {
name = google_service_account.custom_cloudsql_sa.email
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
Error: Error, failed to insert user aixjyznd@yetanothercodeanglerproject.iam.gserviceaccount.com into instance demo: googleapi: Error 400: Invalid request: Database username for Cloud IAM service account should be created without ".gserviceaccount.com" suffix., invalid
Use the key account_id, not name or email:
resource "google_sql_user" "iam_sa_user" {
name = google_service_account.custom_cloudsql_sa.account_id
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
According to "Add an IAM user or service account to the database" and my experience, you should omit the .gserviceaccount.com suffix in the account email.
Sample code:
resource "google_sql_user" "iam_sa_user" {
name = replace(google_service_account.custom_cloudsql_sa.email, ".gserviceaccount.com", "") // "sa-test-account-01#prj-test-stg.iam"
instance = google_sql_database_instance.master.name
type = "CLOUD_IAM_SERVICE_ACCOUNT"
}
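A side note: trimsuffix expresses the same idea a bit more directly; this is just an equivalent alternative to the replace call above:

resource "google_sql_user" "iam_sa_user" {
  # Strip only the trailing ".gserviceaccount.com" from the service account email
  name     = trimsuffix(google_service_account.custom_cloudsql_sa.email, ".gserviceaccount.com")
  instance = google_sql_database_instance.master.name
  type     = "CLOUD_IAM_SERVICE_ACCOUNT"
}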
So the idea is to create a resource health alert for multiple storage accounts using Terraform. It's fairly simple for one storage account, as one would pass the value
output "id" {
description = "Id of the storage account created."
value = azurerm_storage_account.storage.id
}
to resource_id, or hardcode the resource id with the resource's actual id. But my ask here is how I can set up one single alert block for all the storage accounts provisioned by Terraform. I have been trying to use the output above, but the name variable only takes strings. Please provide a sample example of how you would do it.
locals {
  activity_log_alerts = {
    resource_health_alerts = {
      environment         = var.environment
      resource_group_name = var.rgp
      enabled             = "true"
      scopes              = module.main.storage_account_name["storage_name"] # <- the problematic line
      alert_name          = "Resource health alert for storage accounts"
      description         = format("The state of the azure resource is unknown")
      category            = "ResourceHealth"
      level               = "Critical"
      operation_name      = null
      resource_health = [
        {
          current  = ["Unknown"]
          previous = ["Available"]
          reason   = ["PlatformInitiated"]
        }
      ]
    }
  }
}
The error I received is this: Error: Null value found in list
scopes = tolist([var.resource_id])
UPDATE: Another approach. With this approach I was hoping to get all the resources under the same RG and subscription, but got the same error:
data "azurerm_subscription" "current" {
subscription_id = var.subscription_id
}
locals {
  activity_log_alerts = {
    resource_health_alerts = {
      environment         = var.environment
      resource_group_name = var.rgp
      enabled             = "true"
      scopes              = [data.azurerm_subscription.current.id]
      alert_name          = "Resource health alert for storage accounts"
      description         = format("The state of the azure resource is unknown")
      category            = "ResourceHealth"
      level               = "Critical"
      operation_name      = null
      resource_health = [
        {
          current  = ["Unknown"]
          previous = ["Available"]
          reason   = ["PlatformInitiated"]
        }
      ]
    }
  }
}
The error I received is this: Error: Null value found in list
scopes = tolist([var.resource_id])
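For what it's worth, a sketch of one way to feed every storage account into a single alert's scopes, assuming the storage accounts are created with for_each (the resource, variable, and local names below are hypothetical):

# Hypothetical: all storage accounts created from one resource with for_each
resource "azurerm_storage_account" "storage" {
  for_each                 = toset(var.storage_account_names)
  name                     = each.value
  resource_group_name      = var.rgp
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

locals {
  # One list holding the id of every storage account
  storage_account_ids = [for sa in azurerm_storage_account.storage : sa.id]
}

The alert's scopes could then be set to local.storage_account_ids instead of a single var.resource_id.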
I can use Terraform to deploy a Kubernetes cluster in GKE.
Then I have set up the Kubernetes provider as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the user "client", which has no permission to create (for example) deployments. So I get this error when I try to apply my changes with Terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how I should proceed now. How should I give these permissions to the client user?
If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for HTTP basic authentication with the cluster, which is insecure if done over the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there any other better way of doing so?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "${google_container_cluster.default.endpoint}"
token = "${data.google_client_config.default.access_token}"
cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
load_config_file = false
}
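For what it's worth, newer versions of the Kubernetes provider (2.x) also support exec-based credentials. A sketch, assuming the gke-gcloud-auth-plugin binary is installed on the machine running Terraform:

provider "kubernetes" {
  host                   = "https://${google_container_cluster.default.endpoint}"
  cluster_ca_certificate = base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)

  # Fetch short-lived credentials via the GKE auth plugin instead of a static token
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "gke-gcloud-auth-plugin"
  }
}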
OR
give permissions to the default "client" user. But you need valid authentication on the GKE cluster for the provider to run this :/ oops, circular dependency here.
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user that you are using is missing the required RBAC role for creating deployments. Make sure that user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea.
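As an illustration only (the role and binding names here are made up), such a role could be managed from Terraform with the same provider; the chicken-and-egg caveat from the previous answer applies here too:

resource "kubernetes_role" "deployment_manager" {
  metadata {
    name      = "deployment-manager"
    namespace = "default"
  }

  # Grant the verbs needed to manage deployments in this namespace
  rule {
    api_groups = ["apps"]
    resources  = ["deployments"]
    verbs      = ["create", "get", "list", "watch", "update", "patch", "delete"]
  }
}

resource "kubernetes_role_binding" "client_deployment_manager" {
  metadata {
    name      = "client-deployment-manager"
    namespace = "default"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_manager.metadata.0.name
  }

  subject {
    kind      = "User"
    name      = "client"
    api_group = "rbac.authorization.k8s.io"
  }
}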
You need to provide both. Check this example of how to integrate the Kubernetes provider with the Google provider.
Example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}