Terraform - Access SSM Parameter Store Value

I would like some help/guidance on how to securely read the decrypted value of an existing SecureString in SSM Parameter Store for use in other Terraform resources.
For example, we have a GitHub access token stored in SSM for CI - I need to pass this value to the GitHub provider to enable webhooks for CodePipeline.
The SSM parameter is not managed by Terraform, but its decrypted value can be used.
Is this insecure, given that the value would end up in the state file? What is the best practice for this type of use case?
Many thanks!

You can use the data source to reference an already existing resource:
data "aws_ssm_parameter" "foo" {
  name = "foo"
}
One of the attributes of the data source is value, which contains the decrypted value of the parameter (with_decryption defaults to true). You can use it elsewhere in your Terraform code:
data.aws_ssm_parameter.foo.value
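For the use case in the question, a minimal sketch of wiring the decrypted token into the GitHub provider (the parameter name /ci/github-token is hypothetical):

data "aws_ssm_parameter" "github_token" {
  name = "/ci/github-token" # hypothetical parameter name for the stored token
  # with_decryption defaults to true, so .value is the plaintext token
}

provider "github" {
  token = data.aws_ssm_parameter.github_token.value
}

Note that the value read by the data source is still persisted in the Terraform state, so treat the state itself as sensitive: use a remote backend with encryption at rest and restricted access.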

Related

When building Terraform Provider, allow update to resource config without triggering required change

Is there any way to have a schema attribute for a resource that allows updates but isn't actually identified as a change to the resource? Or is there a way to use something like HasChangesExcept within DiffSuppressFunc or CustomizeDiff?
I'm looking to implement a feature where a resource is optionally dependent on a data source for something that is not actually part of its configuration. In this case, the data source provides an access token that is used to create and interact with the resource. Yes, this should just be part of the provider configuration, but I'm looking to avoid including this feature in the provider config for a few reasons. Mainly, I'm building the feature to cover a gap in backend software, and the feature will eventually be unnecessary.
Here's an example of the HCL I'm trying to achieve:
data "auth" "foo" {
  user = "..."
  pw   = "..."
}

resource "service" "foo" {
  name         = "bar"
  details      = "baz"
  access_token = data.auth.foo.access_token
}
Because the auth data source creates a new access_token on each plan, each plan also identifies a change to the service resource.
I believe an appropriate workaround here would be for the access_token in state to be updated only when any of the other fields of the resource is updated.
To accomplish this, I looked at DiffSuppressFunc and CustomizeDiff. However, both are missing a feature I need:
DiffSuppressFunc - receives *schema.ResourceData, which is great, but at this level the diffs attribute of ResourceData isn't populated, which renders the HasChangesExcept method unusable.
CustomizeDiff - receives *schema.ResourceDiff, which does have access to diffs, but does not have HasChangesExcept implemented. This would be okay for one resource, but I'm looking to add the access_token option to multiple resources and want to check all the keys dynamically.

Best Practice for BICEP to Ignore Overwriting Existing KeyVault Secrets

I have the following GUID that is generated in my Bicep module and used as the value of a Key Vault secret:
param keyVaultName string

param apiKey string = newGuid()

resource apikey_secret 'Microsoft.KeyVault/vaults/secrets@2021-11-01-preview' = {
  name: '${keyVaultName}/ApiKey'
  properties: {
    value: apiKey
    attributes: {
      enabled: true
    }
  }
}
Every time I run the Bicep files this GUID is regenerated and replaces the previous value. My preference is for it to be generated only on the first run and then left alone if it already exists on any subsequent run.
I came across a solution which uses tags to track existing secrets, with conditionals in the Bicep file checking whether the tag exists.
I feel like there should be a more elegant solution than having to manage tags in addition to secrets, but I cannot find anything in the docs so far.
There isn't any way to do a "deploy only if it doesn't exist" in Bicep/ARM - ARM is declarative, so it will always seek the goal specified in the template.
Another option you can consider is using a deterministic GUID; that way it won't change, although someone with knowledge of how the function works could "determine" the secret value, e.g.
@secure()
param apiKey string = guid(resourceGroup().id, keyVaultName, 'ApiKey')
Nit - in your code snippet the param is not secure, so anyone with permission to read the deployment at that scope can retrieve the value.

Azure Bicep ( key vault secret passing as a parameter to local variable)

I am new to Azure Bicep. I am trying to use a Key Vault secret name and value for the (Windows) virtual machine credentials, but I am facing a problem passing the name and value of the Key Vault secret as a parameter to a local variable. Can anyone guide me on this?
@description('Password for the Virtual Machine.')
@secure()
param adminPassword string = keyVault.getSecret()
You can't use the getSecret() function in the main.bicep file (i.e. as a defaultValue) - you can only use it when passing a parameter to a module within a Bicep file; @Deep has a link for that...
If you want to pass the secret to main.bicep, you need to use a parameter reference in a parameter file, see: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-use-key-vault#edit-the-parameters-file

Override deleted aws_secretsmanager_secret resource using Terraform

I have a secret which was created using Terraform. Due to a mistake, I commented it out and applied, so the resource was marked for deletion. Now, if I uncomment it and apply again, it says the resource is marked for deletion.
resource "aws_secretsmanager_secret" "rotation-example" {
  name                    = "mysecret"
  description             = "secretexample"
  recovery_window_in_days = 7
}
I can't change the name and create another resource, and I also don't have access to the AWS console/CLI. Please guide me on how to create it again, or whether it is possible to reuse the old one by overriding.
As of now there is no functionality to restore a deleted secret using Terraform. Check this open issue:
https://github.com/terraform-providers/terraform-provider-aws/issues/10259
But you can do it with some manual work; either you will need help from your AWS administrator, or your AWS access key must have the permissions below.
To restore a secret and its metadata in the console, you must have these permissions:
secretsmanager:ListSecrets - Use to navigate to the secret you want to restore.
secretsmanager:RestoreSecret - Use to restore the secret.
If your AWS access key has the above permissions, use the command below to restore the secret:
aws secretsmanager restore-secret --secret-id mysecret
Follow this AWS document to restore the secret:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_delete-restore-secret.html
Once the secret is restored, you can use "terraform import" as below to update your state file with the existing secret's details:
terraform import aws_secretsmanager_secret.rotation-example mysecret
In addition, if you want to create and delete the secret frequently, use:
recovery_window_in_days = 0
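For context, a sketch of the same resource with no recovery window, so destroy/recreate cycles take effect immediately (same hypothetical name as above):

resource "aws_secretsmanager_secret" "rotation-example" {
  name                    = "mysecret"
  recovery_window_in_days = 0 # delete immediately instead of scheduling deletion
}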

Azure DevOps terraform and AKV

In our case we are doing the following:
1. Infra Agent
a. We create a KV
b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).
c. We store the username and password as secrets in the newly created KV
2. Data Agent
a. We want to deploy the DDL from the repos onto the SQL Database we created in the Infra Agent. We need to use the SQL Database username and password stored in the KV to do so.
b. In order to read the secrets from the KV, our current thinking is to insert the username and password into pipeline parameters in step 1 (i.e. setting them at runtime) so we can reuse the values across other Agents.
A couple of questions:
- Is that the right approach? Should the KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
- What is best practice to access the Database username and password in other Agents, given that:
  - We can't use variable groups because the KV and values won't be known until runtime
  - We can't use the Key Vault Task (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)
b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).
If you're using Key Vault, then I assume you're talking about Azure SQL Database. However, at the moment Terraform only supports assigning an administrator username and password to the SQL Server instance, not to individual SQL databases.
In this case, I recommend using a random_password resource to provide the value of an azurerm_key_vault_secret, which can then be assigned as the azurerm_sql_server administrator password.
With this setup you know for certain that the password in Key Vault is always in sync and can be treated as the source of truth for your SQL Server passwords (unless someone goes and resets the administrator password manually, of course).
Now, if you ever want to reset an SQL Server password, simply taint the random_password, forcing it to be recreated with a new value, which in turn updates the azurerm_key_vault_secret value and then the azurerm_sql_server password.
Here's some quick HCL as an example:
resource "random_password" "password" {
  length  = 16
  special = false
}

resource "azurerm_key_vault_secret" "password_secret" {
  depends_on = [<the Key Vault access policy for your infra agent which runs terraform apply>]
  ...
  value = random_password.password.result
  ...
}

resource "azurerm_sql_server" "sql_server" {
  ...
  administrator_login_password = azurerm_key_vault_secret.password_secret.value
  ...
}
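The rotation workflow described above would then look something like this (resource address taken from the example):
terraform taint random_password.password
terraform apply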
Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
This is a sensible approach, but remember that billing is based on operations against secrets, keys and certificates, while Key Vaults themselves are free. It's recommended to create a Key Vault for each application, because access policies can only be applied per Key Vault, not per secret/key/cert.
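As an illustration of the per-application pattern, a minimal sketch of a vault whose access policy grants an identity access to secrets (all names here are hypothetical, and the resource group and client config data source are assumed to exist elsewhere in the configuration):

resource "azurerm_key_vault" "app" {
  name                = "kv-myapp" # hypothetical: one vault per application
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  # Access policies are scoped to the whole vault, not to individual secrets.
  access_policy {
    tenant_id          = data.azurerm_client_config.current.tenant_id
    object_id          = data.azurerm_client_config.current.object_id
    secret_permissions = ["Get", "List", "Set"]
  }
}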
We can't use the Key Vault Task (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)
Why is this only known at runtime? This sounds like a limitation of your own process, since Terraform lets you specify a name for each Key Vault when you create it. Reconsider whether this is really a requirement and why you are doing it. If it definitely is a requirement and your Key Vault names are dynamically generated, then you can use terraform output to get the Key Vault name during the pipeline and set it as a variable during the build.
To fetch the Key Vault name as an output, use the following HCL:
output "key_vault_name" {
  value = azurerm_key_vault.demo_key_vault.name
}
and run "terraform output key_vault_name" to write the value to stdout.
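A sketch of wiring that output into a pipeline variable from an Azure DevOps script step (the variable name kvName is hypothetical), so a later Key Vault Task can consume it:

KV_NAME=$(terraform output -raw key_vault_name)
echo "##vso[task.setvariable variable=kvName]$KV_NAME"

The ##vso[task.setvariable] logging command makes kvName available to subsequent steps in the job.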
