Terraform | Retrieve the client-key certificate from Cloud SQL

I would like to retrieve the client-key SSL key of Cloud SQL via Terraform. I was able to retrieve the server-ca and the client-cert, but I have no idea how to get the client-key file. To retrieve the client-cert I used the following:
resource "google_sql_ssl_cert" "client_cert" {
  depends_on = ["google_sql_database_instance.new_instance_sql_master",
                "google_sql_user.users"]
  common_name = "terraform1"
  project     = "${var.project_id}"
  instance    = "${google_sql_database_instance.new_instance_sql_master.name}"
}
Output.tf
output "client_cert" {
  value       = "${google_sql_ssl_cert.client_cert.0.cert}"
  description = "The client certificate used to connect to the SQL instance via SSL"
}
Please let me know how I can retrieve the client-key private key; i.e. I already have server-ca and client-cert, and I still need client-key via Terraform.

In order to get the client private key, use the following snippet together with any other parameters you wish to have:
output "client_privkey" {
  value = "${google_sql_ssl_cert.client_cert.*.private_key}"
}
For the client certificate: value = "${google_sql_ssl_cert.client_cert.*.cert}"
For the server CA certificate: value = "${google_sql_ssl_cert.client_cert.*.server_ca_cert}"
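Putting the three together, a minimal Output.tf could look like the following (a sketch in the same Terraform 0.11 interpolation style; attribute names follow the google_sql_ssl_cert resource, and marking the key sensitive keeps it out of CLI output):

```hcl
output "server_ca" {
  value = "${google_sql_ssl_cert.client_cert.server_ca_cert}"
}

output "client_cert" {
  value = "${google_sql_ssl_cert.client_cert.cert}"
}

output "client_key" {
  # Redacted in CLI output, but note the private key still lands
  # in the state file in plain text.
  sensitive = true
  value     = "${google_sql_ssl_cert.client_cert.private_key}"
}
```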

Related

Onepassword_item error terraform plan "status 401: Authentication: (Invalid token signature), Each header's KID must match the KID of the public key"

I have a Kubernetes cluster running on Google Cloud. The cluster has an op_connect_server up and running, and I am trying to use Terraform to create items in some specific vaults.
To be able to run it locally, I am port-forwarding port 8080 to my Kubernetes op_connect_server pod:
kubectl port-forward $(kubectl get pods -A | grep onepassword-connect | grep -v operator | awk '{print $2}') 8080:8080 -n tools
My Kubernetes cluster is private, with a public address attached to it. To run locally, I access its public address; to run on GitLab, I access its private address (my GitLab pipeline machine runs inside the Kubernetes cluster and can reach the private address; this works for other features).
When I run it locally, everything works well: the items are created in the vault without any problems, and during terraform plan it can connect to the op_connect_server and check the items without any error.
In my Terraform provider for one_password I set the token and the op_connect_server address.
When I run it in my pipeline (GitLab), I get the error: status 401: Authentication: (Invalid token signature), Each header's KID must match the KID of the public key.
The error happens during terraform plan, when checking for some onepassword_item. I tried to retrieve the same information using curl and was able to, but for some reason it fails in Terraform.
I already checked/tried:
Checked all the variables (token, op_connect_server address, vault id); they are the same in both places (local and GitLab).
Tried using the same cluster endpoint (the public one) both when running locally and from GitLab.
Deleted the cluster and created/ran everything from the GitLab pipeline. The creation process works (op_connect_server and all items are created, and so on), but when I run it again, it fails with the same error message.
This is my code for creating the items:
resource "onepassword_item" "credentials" {
  vault    = ""
  title    = "Redis Database cache"
  category = "database"
  type     = "other"
  username = ""
  database = "Redis Database"
  hostname = module.beta_redis.database_host_access_private
  port     = module.beta_redis.database_host_access_port
  password = module.beta_redis.auth_string
  section {
    label = "TLS"
    field {
      label = "tls_cert"
      value = module.beta_redis.tls_cert
      type  = "CONCEALED"
    }
    field {
      label = "tls_transit_encryption_mode"
      value = module.beta_redis.tls_transit_encryption_mode
      type  = "CONCEALED"
    }
    field {
      label = "tls_sha1_fingerprint"
      value = module.beta_redis.tls_sha1_fingerprint
      type  = "CONCEALED"
    }
  }
}
My op_connect_server has these settings:
set {
  name  = "connect.credentials_base64"
  value = data.local_file.input.content_base64
  type  = "string"
}
set {
  name  = "connect.serviceType"
  value = "NodePort"
}
set {
  name  = "operator.create"
  value = "true"
}
set {
  name  = "operator.autoRestart"
  value = "true"
}
set {
  name  = "operator.clusterRole.create"
  value = "true"
}
set {
  name  = "operator.roleBinding.create"
  value = "true"
}
set {
  name  = "connect.api.name"
  value = "beta-connect-api"
}
set {
  name  = "operator.token.value"
  value = var.op_token_beta
}
My 1Password provider version is 1.1.4.
Does anyone have a clue why this could be happening, or how I can debug it?

Add organization to subject field with Terraform's Vault provider

I'm trying to provision a Kubernetes cluster by creating all the certificates through Vault first. This makes things easier in the context of Terraform, because I can insert all this information into the cloud-init config, so I don't have to rely on a node being ready and then transfer data from one node to another.
In any case, the problem is that vault_pki_secret_backend_cert doesn't seem to support any change to the subject field except for common_name (https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/pki_secret_backend_cert), whereas Kubernetes relies on certificates where the organization is specified. For example:
Subject: O = system:masters, CN = kube-etcd-healthcheck-client
I'm generating these certificates directly with Vault's intermediate certificate, so the private key stays in Vault. I cannot generate them separately, and I wouldn't want to anyway, because I'm trying to provision basically everything with Terraform.
Any ideas how I can get around this issue?
I was able to find the answer eventually. The only way to do this with Terraform/Vault seems to be to configure the backend role and add the organization parameter to that role:
https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/pki_secret_backend_role.
For example, you define the role:
resource "vault_pki_secret_backend_role" "etcd_ca_clients" {
  depends_on       = [vault_pki_secret_backend_intermediate_set_signed.kube1_etcd_ca]
  backend          = vault_mount.kube1_etcd_ca.path
  name             = "kubernetes-client"
  ttl              = 3600
  allow_ip_sans    = true
  key_type         = "ed25519"
  allow_any_name   = true
  allowed_domains  = ["*"]
  allow_subdomains = true
  organization     = ["system:masters"]
}
And here you tell Vault to generate the certificate based on that role:
resource "vault_pki_secret_backend_cert" "etcd_healthcheck_client" {
  for_each    = { for k, v in var.kubernetes_servers : k => v if startswith(k, "etcd-") }
  depends_on  = [vault_pki_secret_backend_role.etcd_ca_clients]
  backend     = vault_mount.kube1_etcd_ca.path
  name        = vault_pki_secret_backend_role.etcd_ca_clients.name
  common_name = "kube-etcd-healthcheck-client"
}
The limitation makes no sense whatsoever to me, but if you don't need a large number of very different certificates, it's not all too bad and you don't have to repeat a lot of code.
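For completeness, the issued material can then be consumed elsewhere in the configuration, e.g. to build the cloud-init payload (a hedged sketch: the local value name is made up, while the attributes come from the vault_pki_secret_backend_cert documentation):

```hcl
locals {
  # One certificate/key pair per etcd server created by the for_each above.
  etcd_healthcheck_client_certs = {
    for k, cert in vault_pki_secret_backend_cert.etcd_healthcheck_client : k => {
      certificate = cert.certificate
      issuing_ca  = cert.issuing_ca
      private_key = cert.private_key # stored in Terraform state: protect the state file
    }
  }
}
```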

Trim end of queue primary_connection_string and send to Key Vault using Terraform

I am able to store the full primary_connection_string of a Service Bus queue in Azure Key Vault using Terraform, but I am not able to store the same value without the ;EntityPath=********* suffix.
Original connection string: Endpoint=sb://****.servicebus.windows.net/;SharedAccessKeyName=;SharedAccessKey=;EntityPath=*****
Required connection string to store in Key Vault: Endpoint=sb://****.servicebus.windows.net/;SharedAccessKeyName=;SharedAccessKey=
I tried the code below using replace, but it did not work. It stores the literal string "azurerm_servicebus_queue_authorization_rule.que-referee-sr-lr.primary_connection_string" instead of the value defined above:
resource "azurerm_key_vault_secret" "que-referee-sr-lr-connectionstring" {
  name         = lower(format("%s-%s", azurerm_servicebus_queue_authorization_rule.que-referee-sr-lr.name, "primary-connection-string"))
  value        = replace("azurerm_servicebus_queue_authorization_rule.que-referee-sr-lr.primary_connection_string", "/;EntityPath.*", "")
  key_vault_id = data.azurerm_key_vault.PlatformKV.id
}
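A likely fix, offered as an unverified sketch: the value expression is wrapped in quotes, so Terraform stores the literal string instead of evaluating the reference, and replace() only treats its pattern argument as a regular expression when it is wrapped in slashes on both sides (the original pattern is missing the closing /):

```hcl
resource "azurerm_key_vault_secret" "que-referee-sr-lr-connectionstring" {
  name = lower(format("%s-%s", azurerm_servicebus_queue_authorization_rule.que-referee-sr-lr.name, "primary-connection-string"))

  # Unquoted reference plus a properly delimited regex that strips the
  # ";EntityPath=..." suffix from the connection string.
  value = replace(azurerm_servicebus_queue_authorization_rule.que-referee-sr-lr.primary_connection_string, "/;EntityPath=.*/", "")

  key_vault_id = data.azurerm_key_vault.PlatformKV.id
}
```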

How to get Vault secret through Terraform

I have created a key/secret pair in the Vault UI and am trying to fetch the secret through Terraform.
Please share your thoughts!
You need to define a vault provider and fetch the secret as a data source. Here's a simple example:
provider "vault" {
  address         = "https://my-vault-address.com"
  skip_tls_verify = true
  token           = "xxx"
}

data "vault_generic_secret" "my_secret" {
  path = "secret/path/to/mysecret"
}
Then, in order to use it:
...
pass = data.vault_generic_secret.my_secret.data["password"]
...
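If the fetched value is surfaced anywhere, it is worth marking it sensitive so plan/apply output redacts it (the output name is illustrative; the secret still ends up in the state file in plain text):

```hcl
output "db_password" {
  sensitive = true
  value     = data.vault_generic_secret.my_secret.data["password"]
}
```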

Add variable to data

I'm trying to concatenate a data source with a variable (Terraform v0.12.0):
variable "my_var" {
  default = "secret_string"
}

auth_token = data.external.get_secret.result.var.my_var
It works in this case:
auth_token = data.external.get_secret.result.secret_string
As far as I can see, I can't add a variable to a data reference. Do we have any workaround for this case? Thanks
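As an aside, Terraform 0.12 can look up a map attribute with a dynamic key using index brackets, which may be what was intended here (a sketch reusing the names above; result from the external data source is a map of strings, so a variable can serve as the key):

```hcl
variable "my_var" {
  default = "secret_string"
}

# Bracket indexing instead of attribute access with a literal name:
auth_token = data.external.get_secret.result[var.my_var]
```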
It's not very clear what you need, so let me guess.
You can use different endpoints to store your secrets, such as HashiCorp Vault, AWS SSM, or AWS Secrets Manager; that way you avoid saving secrets directly in source code.
Here I use AWS SSM as an example of how to reference a secret via a variable.
Suppose you have set the SSM key my_var in AWS:
variable "my_var" {
  default = "my_var"
}

data "aws_ssm_parameter" "read" {
  name = "${var.my_var}"
}
So now you can easily reference it:
auth_token = "${data.aws_ssm_parameter.read.value}"
Note: the unencrypted value of a SecureString will be stored in the raw state as plain text.
Note: the data source currently follows the behavior of the SSM API and returns a string value regardless of parameter type. For type StringList, you can use the built-in split() function to get the values as a list, e.g. split(",", data.aws_ssm_parameter.subnets.value).
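The StringList case from the note can be sketched like this (the parameter name is hypothetical):

```hcl
data "aws_ssm_parameter" "subnets" {
  name = "/network/subnet_ids" # hypothetical StringList parameter
}

# split() turns the comma-separated string value into a list of IDs.
locals {
  subnet_ids = split(",", data.aws_ssm_parameter.subnets.value)
}
```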
