In our case we are doing the following:
1. Infra Agent
a. We create a KV
b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).
c. We store the username and password as secrets in the newly created KV
2. Data Agent
a. We want to deploy the DDL from the repos onto the SQL Database we created in the Infra Agent. We need to use the SQL database username and password stored in the KV to do so.
b. In order to read the secrets from the KV, our current thinking is to insert the username and password into pipeline parameters in step 1 (i.e. setting them at runtime) so we can reuse the values across other Agents.
A couple of questions:
- Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
- What is best practice to access the Database username and password in other Agents, given that:
o We can’t use variable groups because the KV and values won’t be known until runtime
o We can’t use the Key Vault Task (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)
b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).
If you're using Key Vault, then I assume you're talking about Azure SQL Databases. However, at the moment Terraform only supports assigning an administrator username and password for the SQL Server instance, not for individual SQL databases.
In this case, I recommend using a random_password resource to generate a value for an azurerm_key_vault_secret, which can then be assigned as the azurerm_sql_server administrator password.
With this setup you know for certain that the password in Key Vault is always in sync, and can be treated as the source of truth for your SQL server passwords (unless someone goes and resets the administrator password manually of course).
Now if you ever want to reset an SQL server password, simply taint the random_password, forcing it to be recreated with a new value, which in turn updates the azurerm_key_vault_secret value and then the azurerm_sql_server password.
Here's some quick HCL as an example
resource "random_password" "password" {
length = 16
special = false
}
resource "azurerm_key_vault_secret" "password_secret" {
depends_on = [<the Key Vault access policy for your infra agent which runs terraform apply>]
...
value = random_password.password.result
...
}
resource "azurerm_sql_server" "sql_server" {
...
administrator_login_password = azurerm_key_vault_secret.password_secret.value
...
}
Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
This is a sensible approach, but remember that Key Vaults themselves are free and you are billed per operation on secrets, keys and certificates. It's recommended to create a Key Vault for each application, because access policies can only be applied per Key Vault and not per secret/key/cert.
We can’t use the Key Vault Task (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)
Why is this only known at runtime? This sounds like a limitation of your own process, since Terraform allows you to specify a name for each Key Vault when you create it. Reconsider whether this is really a requirement and why you are doing this. If it definitely is a requirement and your Key Vault names are dynamically generated, then you can use terraform output to get the Key Vault name during the pipeline and set it as a variable during the build.
To fetch the Key Vault name as an output just use the following HCL
output "key_vault_name" {
value = "${azurerm_key_vault.demo_key_vault.name}"
}
and run `terraform output key_vault_name` to write the value to stdout.
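Inside an Azure DevOps pipeline you could then expose that value to later steps with a setvariable logging command. A minimal sketch (the variable name keyVaultName is just illustrative; -raw needs Terraform 0.14+, otherwise drop it and trim the quotes):

# Run after terraform apply in the same job.
KV_NAME=$(terraform output -raw key_vault_name)
echo "##vso[task.setvariable variable=keyVaultName]$KV_NAME"
# Subsequent steps can now reference $(keyVaultName), e.g. in the Azure Key Vault task.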
Related
az keyvault secret list --vault-name $VaultName --query "[?attributes.expires<='2022-06-30']" -o table
Output:
ContentType | Name
----------- | ----------
Content1    | KV-Secret1
Content2    | KV-Secret2
The main purpose of storing the output in an array is that I want to get the two values into different variables, i.e. $variable1 = Content1 and $variable2 = KV-Secret1.
I need to list all the secrets from a specific key vault that are going to expire within 30 days of the current date, and for each of those secrets I need two values - 1. the secret name and 2. the secret type - which will then be used in another script to reset the secrets' expiry date.
Thank you sikumars-msft for your suggestion. Posting it as a community wiki so that others who encounter the same requirement can benefit from it.
You must specify each key vault's name in the URI (for example: "GET", "/" + https://keyvaults["Vault_Uri"] + "keys?api-version=7.0" or "GET", "/" + https://keyvaults["Vault_Uri"] + "secrets?api-version=7.0") to retrieve the expiry dates of the respective keys and secrets, because this information is part of the data plane, which is what lets you work with the data stored in a key vault. Hence, you can't use the management plane endpoint "GET", '/subscriptions/xxxxx/providers/Microsoft.KeyVault/vaults?api-version=2019-09-01' to retrieve information about data stored in Azure Key Vault.
To learn more about the different Key Vault planes, refer to: https://learn.microsoft.com/en-us/answers/questions/25726/what-is-management-and-data-plane-in-azure-key-vau.html
Therefore, you need to get all the key vault names created in your subscription and load them into a variable, and then you can retrieve the expiry dates of the keys and secrets accordingly. If you can't retrieve the Key Vault name from a variable, you could consider the alternative approach of enabling Azure Key Vault logging and monitoring the Microsoft.KeyVault.SecretNearExpiry event to get notifications via Azure Automation (Event Grid) or a Logic App, as explained below:
Azure Key Vault logging: https://learn.microsoft.com/en-us/azure/key-vault/general/logging?tabs=Vault
Creating a Logic App to remind Key Vault key Expiry: https://learn.microsoft.com/en-us/answers/questions/398632/creating-a-logic-app-to-remind-key-
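Coming back to the original ask of getting the secret name and content type into separate variables, here is a minimal az CLI sketch (using the same $VaultName and cutoff date as in the question; the loop body is just a placeholder for the expiry-reset script):

az keyvault secret list --vault-name "$VaultName" \
  --query "[?attributes.expires<='2022-06-30'].[contentType, name]" -o tsv |
while IFS=$'\t' read -r secretType secretName; do
  # $secretType and $secretName can now be passed to the script that resets the expiry date
  echo "ContentType=$secretType Name=$secretName"
done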
I have a secret which was created using Terraform. Due to a mistake I commented it out and applied the tf, so the resource was marked for deletion; but now if I enable it again and apply, it says the resource is marked for deletion.
resource "aws_secretsmanager_secret" "rotation-example" {
name = "mysecret"
description ="sccretatexample"
recovery_windows_in_days = 7
}
I can't change the name and create another resource, and I also don't have access to the AWS console/CLI. Please guide me on how to create it again, or is it possible to reuse the old one by overriding it?
As of now there is no functionality available to retrieve a deleted secret using Terraform. Check this open issue:
https://github.com/terraform-providers/terraform-provider-aws/issues/10259
But you can do it with some manual work, though either you will need help from your AWS administrator or your AWS access key must have the permissions below.
To restore a secret and the metadata in the console, you must have these permissions:
secretsmanager:ListSecrets – Use to navigate to the secret you want to restore.
secretsmanager:RestoreSecret – Use to cancel the scheduled deletion and restore the secret.
If your AWS access key has the above permissions, use the command below to restore the secret.
aws secretsmanager restore-secret --secret-id mysecret
Follow this AWS document to restore the secret:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_delete-restore-secret.html
Once the secret is restored, you can use "terraform import" as below to update your state file with the existing secret's details.
terraform import aws_secretsmanager_secret.rotation-example mysecret
In addition, if you want to create and delete secrets frequently, use the following:
recovery_window_in_days = 0
So I am trying to find some way to hide a secret in Terraform. The caveat is that the secret is for a Service Principal that is used to connect to our Key Vault. I can't store the secret in the Key Vault, as it hasn't connected to the Key Vault yet at that point. This is part of my main tf file.
provider "azurerm" {
alias = "kv_prod"
version = "1.28"
tenant_id = "<tenant id>"
subscription_id = "<sub id>"
client_id = "<SP client id>"
client_secret = "<SP secret>"
}
This is used further down my module to store Storage Account keys and other secrets. It just happens to be in a Prod subscription that not everyone has access to.
Has anyone run into something like this? If so, how would you go about securing that secret?
#maltman There are several ways to hide a secret in terraform. Here is a blog that talks about them:
https://www.linode.com/docs/applications/configuration-management/secrets-management-with-terraform/
However, if you are only concerned about encrypting the secrets file while checking it in and out of git, you can use something like git-crypt.
You would have to create a couple of files:
variables.tf -> Define your variables here
variable "client_secret" {
description = "Client Secret"
}
terraform.tfvars -> Give the value of the variable here
client_secret = "your-secret-value"
Now use git-crypt to encrypt terraform.tfvars while checking into git
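A rough sketch of that git-crypt setup (one-time initialisation; the GPG user is just an example):

# Initialise git-crypt for the repo and mark terraform.tfvars for encryption.
git-crypt init
echo 'terraform.tfvars filter=git-crypt diff=git-crypt' >> .gitattributes
git add .gitattributes terraform.tfvars
git commit -m "Store terraform.tfvars encrypted with git-crypt"
# Grant teammates access via their GPG keys:
# git-crypt add-gpg-user user@example.com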
For your requirements, I think there are two comparatively secure ways to do this.
One is to store the credentials as environment variables so that you do not expose the secret in the tf files.
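For reference, a minimal sketch of that approach - the azurerm provider picks these environment variables up, so the provider block needs no client_secret at all:

# Export before running terraform; nothing secret lives in the .tf files.
export ARM_TENANT_ID="<tenant id>"
export ARM_SUBSCRIPTION_ID="<sub id>"
export ARM_CLIENT_ID="<SP client id>"
export ARM_CLIENT_SECRET="<SP secret>"
terraform plan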
The other is to log in with the credentials via the Azure CLI and then just set the subscription, again without exposing the secret in the tf file.
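A sketch of the CLI-login variant (the secret is still supplied once, interactively or from a pipeline secret, rather than sitting in a .tf file):

az login --service-principal \
  --username "<SP client id>" \
  --password "<SP secret>" \
  --tenant "<tenant id>"
az account set --subscription "<sub id>"
# Terraform can then authenticate through the Azure CLI session.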
Those are the two ways that I think are secure and workable for you. Hope this helps.
Terraform doesn't have this feature built in, but it can be achieved using a third-party integration.
Storing a secret in Terraform:
Terraform has an external data source that can be used to run an external program and use the returned value elsewhere. I have used the Ansible Vault feature to encrypt and decrypt the secrets, so they are stored encrypted in the repository rather than as plaintext.
data "external" "mysecret" {
program = ["bash", "-c", "${path.module}/get_ansible_secret.sh"]
query = {
var = "${var.secret_value}"
vault_password_file = "${path.module}/vault-password.sh"
# The file containing the secret we want to decrypt
file = "${var.encrypted_file}"
}
}
Refer to the working example: github example
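The decryption script itself is not shown above; below is a hedged sketch of what such an external program could look like. It follows the external data source protocol (JSON query on stdin, flat JSON map of strings on stdout); the file name and keys mirror the query above but are otherwise illustrative.

#!/usr/bin/env bash
# Illustrative get_ansible_secret.sh: decrypt an ansible-vault file and hand the result back to Terraform.
set -euo pipefail
# Parse the query Terraform passes on stdin into shell variables.
eval "$(jq -r '@sh "FILE=\(.file) VAULT_PASSWORD_FILE=\(.vault_password_file)"')"
SECRET="$(ansible-vault view --vault-password-file "$VAULT_PASSWORD_FILE" "$FILE")"
# Return a flat JSON map; read it in HCL as data.external.mysecret.result.secret
jq -n --arg secret "$SECRET" '{secret: $secret}'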
I am going to create an ADO pipeline to handle this instead, so the code simply does not have to be available.
I would like some help / guidance on how to securely access the SSM Parameter Store for the (decrypted) value of an existing SecureString, for use in other Terraform resources.
e.g. we have a GitHub access token stored in SSM for CI - I need to pass this value to the GitHub provider to enable webhooks for CodePipeline.
The SSM Parameter is not something managed from terraform, but its decrypted value can be used.
Is this insecure given the value would end up in the state file? What is the best practice for this type of use case?
Many thanks!
You can use the data source to reference an already existing resource:
data "aws_ssm_parameter" "foo" {
name = "foo"
}
One of the properties of the data source is value, which contains the actual value of the parameter. You can use it elsewhere in your Terraform code:
data.aws_ssm_parameter.foo.value
In the Azure Portal > Key vaults > Secrets, I have secrets with JSON values (I did not create them). Something like:
...
"SubscriptionId": "XXXXXXX",
"BaseAuthUri": "https://login.microsoftonline.com/XXXXX/oauth/authorize?client_id="&api-version=
...
I would like to add another URL value to it. How can I edit the secrets in the Azure portal?
How is the value of api-version set?
Thanks
You can only change secret attributes such as the expiration date and activation date; you cannot change a secret's value, either programmatically or via the Azure Portal. If you want to update the secret without creating a new one (meaning the secret identifier stays intact), you can create a new version of the existing secret.
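For example, a sketch with the az CLI (substitute your own vault and secret names): setting the secret again creates a new version under the same name.

az keyvault secret set \
  --vault-name "<vault-name>" \
  --name "<secret-name>" \
  --value '<updated json value>'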
If the secret value contains the variables used to get the authorization code, you don't need the api-version, because the URI you call is the authorization endpoint.