Best Practice for Bicep to Ignore Overwriting Existing KeyVault Secrets

I have the following GUID that is generated in my Bicep module and used as the value of a Key Vault secret:
param keyVaultName string
param apiKey string = newGuid()

resource apikey_secret 'Microsoft.KeyVault/vaults/secrets@2021-11-01-preview' = {
  name: '${keyVaultName}/ApiKey'
  properties: {
    value: apiKey
    attributes: {
      enabled: true
    }
  }
}
Every time I run the Bicep files this GUID is regenerated and replaces the previous value. My preference is for it to be generated only on the first run and then ignored, if it exists, on any subsequent run.
I came across this solution, which uses tags to track existing secrets and then conditionals within the Bicep file that check whether the tag exists.
I feel like there should be a more elegant solution than having to manage tags in addition to secrets, but I cannot find anything in the docs so far.

There isn't any way to do a "deploy only if it doesn't exist" in Bicep/ARM - ARM is declarative, so it will always seek the goal specified in the template.
Another option you can consider is to use a deterministic GUID; that way it won't change, but someone with knowledge of how the function works could "determine" the secret value, e.g.
@secure()
param apiKey string = guid(subscription().id, keyVaultName, 'ApiKey')
Nit - in your code snippet the param is not secure, so someone with permission to the deployment at that scope can retrieve the value.
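Putting that together, a minimal sketch of the deterministic variant of the original module (the guid() seeds are illustrative; any inputs that stay stable across deployments will do):

param keyVaultName string

@secure()
param apiKey string = guid(subscription().id, keyVaultName, 'ApiKey')

resource apikey_secret 'Microsoft.KeyVault/vaults/secrets@2021-11-01-preview' = {
  name: '${keyVaultName}/ApiKey'
  properties: {
    value: apiKey
    attributes: {
      enabled: true
    }
  }
}

Because guid() is a pure function of its arguments, every deployment computes the same value, so the secret is rewritten with an identical string rather than rotated.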

Related

When building a Terraform provider, allow updates to resource config without triggering a required change

Is there any way to have a schema attribute for a resource allow updates without being identified as a change that requires modifying the resource? Or is there a way to use something like HasChangesExcept within DiffSuppressFunc or CustomizeDiff?
I'm looking to implement a feature where a resource is optionally dependent on a data source for something that is not actually part of its configuration. In this case, the data source provides an access token that is used to create and interact with the resource. Yes, this should just be part of the provider configuration, but I'm looking to avoid including this feature in the provider config for a few reasons; mainly, I'm building the feature to cover a gap in backend software, and the feature will eventually be unnecessary.
Here's an example of the HCL I'm trying to achieve:
data "auth" "foo" {
  user = "..."
  pw   = "..."
}

resource "service" "foo" {
  name         = "bar"
  details      = "baz"
  access_token = data.auth.foo.access_token
}
Because the auth data source creates a new access_token on each plan, each plan also identifies a change to the service resource.
I believe an appropriate workaround here would be for the resource's access_token in state to be updated only when one of the other fields of the resource is updated.
To accomplish this, I looked at DiffSuppressFunc and CustomizeDiff. However, both are missing a feature I'd need:
DiffSuppressFunc - receives *schema.ResourceData, which is great, but at this level the diff attribute of ResourceData isn't populated, which renders the HasChangesExcept method unusable.
CustomizeDiff - receives *schema.ResourceDiff, which does have access to the diffs but does not implement HasChangesExcept. That would be okay for a single resource, but I'm looking to add the access_token option to multiple resources and want to check all the keys dynamically, as in the sketch below.
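One possible shape for that dynamic check (a sketch, not from the thread; it assumes SDK v2's helper/schema, where *schema.ResourceDiff exposes GetChangedKeysPrefix and Clear, and uses the access_token attribute from the example above):

package provider

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// suppressTokenOnlyDiff drops the access_token diff when it is the only
// attribute that changed, so a rotated token alone does not force an update.
func suppressTokenOnlyDiff(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	for _, key := range d.GetChangedKeysPrefix("") {
		if key != "access_token" {
			return nil // some other field changed; keep the token diff too
		}
	}
	return d.Clear("access_token") // only the token changed; wipe its diff
}

Because the function only ranges over whatever keys the diff reports, the same CustomizeDiff can be attached to every resource that carries the access_token attribute.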

Error: error creating Secrets Manager Secret: ResourceExistsException: The operation failed because the secret project_lambda already exists

I am working on a project that uses a Jenkinsfile; given the name of a Lambda, it creates that Lambda in AWS along with its Terraform configuration and uses AWS Secrets Manager to grab the secrets.
I have created the secrets via Terraform and essentially want to keep all of the secrets for each of the Lambdas centralized in one location ("project_lambda").
The tf looks like this (there is a policy as well, but it has been omitted):
resource "aws_secretsmanager_secret" "project_lambda" {
name = "project_lambda"
description = "Secrets for project"
recovery_window_in_days = 0
}
resource "aws_secretsmanager_secret_version" "sversion" {
secret_id = aws_secretsmanager_secret.project_lambda.id
secret_string = jsonencode(var.map_of_secrets)
}
The pipeline generated the secrets fine, and re-ran fine while there was only one Lambda. But when I added another (they have separate state), this error comes up:
Error: error creating Secrets Manager Secret: ResourceExistsException: The operation failed because the secret project_lambda already exists.
I tried commenting out the code, but then the secret was marked for deletion and I had to delete it manually.
Any ideas for what the approach should be to solve this? Can I force recreation of the secret (delete, then create), or delete that code and somehow have the secret not be marked for deletion?
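One common pattern (a sketch, not from the original thread): let a single state own the aws_secretsmanager_secret resource, and have the other Lambda configurations read the existing secret through a data source instead of declaring it again:

# In the states that should reuse, not own, the secret:
data "aws_secretsmanager_secret" "project_lambda" {
  name = "project_lambda"
}

# Reference data.aws_secretsmanager_secret.project_lambda.id wherever the
# resource's id was used before.

To stop managing an already-created secret without destroying it, terraform state rm aws_secretsmanager_secret.project_lambda removes it from that state while leaving the secret in AWS.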

Azure Bicep (passing a key vault secret name and value as a parameter to a local variable)

I am new to Azure Bicep. I am trying to use a key vault secret name and value for the virtual machine (Windows) credentials, but I am facing a problem with passing the name and value of the key vault secret as a parameter to a local variable. Can anyone guide me regarding this matter?
@description('Password for the Virtual Machine.')
@secure()
param adminPassword string = keyVault.getSecret()
You can't use the getSecret() function in the main.bicep file (i.e. as a defaultValue) - you can only use it when passing a parameter to a module within a Bicep file. @Deep has a link for that...
If you want to pass the secret to main.bicep, you need to use a parameter reference in a parameter file; see: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-use-key-vault#edit-the-parameters-file
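A minimal sketch of the module pattern (the module path and secret name are illustrative):

param keyVaultName string

// Reference the vault that already holds the secret.
resource kv 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
  name: keyVaultName
}

// getSecret() is only valid here, when assigning to a module parameter
// that is decorated with @secure() inside vm.bicep.
module vm './vm.bicep' = {
  name: 'vmDeploy'
  params: {
    adminPassword: kv.getSecret('adminPassword')
  }
}

Inside vm.bicep, the parameter must be declared as @secure() param adminPassword string for the assignment to compile.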

ADF error when reusing parameter in replace function

I am parameterizing a linked service (SQL Server) in ADF, but I am having trouble reusing parameters for different service properties as dynamic content.
I have created several parameters for the SQL Server properties:
ServerName
Environment
DatabaseName
DBUserName
A Key Vault is used to store sensitive information for the properties, where the secret names are constructed like "POC-USER-MYDOMAIN-MYUSER".
The DBUserName parameter for the linked service contains a Windows login like "MyDomain\MyUser". I use the DBUserName parameter both for the "User name" property and for the password stored in Key Vault.
The "User name" property has this dynamic content: "@{linkedService().DBUserName}", and the Key Vault secret name has this dynamic content: "@{linkedService().Environment}-USER-@{replace(linkedService().DBUserName, '\', '-')}".
(screenshot: linked service definition)
When I execute "Test connection", I use these parameters:
(screenshot: parameter values)
And "Test connection" returns this error:
(screenshot: error message)
I can get it working if I create a new parameter named "DBUserNameCopy" and copy the value from "DBUserName", then change either the "User name" property or the Key Vault secret name dynamic content to use the new parameter, and execute "Test connection" with:
(screenshot: parameter values using the duplicated parameter)
So the dynamic content for the two properties works correctly, but only if they don't share one parameter.
I tried different things to avoid this error, but ended up with the conclusion that you cannot use the same parameter in multiple properties if you use the replace function (I don't know whether this applies to all functions).
Does anyone know how to get this to work?
I tried this scenario, and it seems that you cannot use the same linked service parameter in two dynamic expressions. In your case you used DBUserName twice: once in the user name dynamic expression and once to construct the Key Vault secret name. Aside from your workaround of creating a parameter with a different name, I would manipulate the value you pass to the Key Vault secret name parameter outside the linked service: do it in the dataset that references the linked service, and include the dynamic expression that prepares the parameter value in the dataset definition.
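For example (a sketch; a dedicated SecretName linked service parameter and matching dataset parameters are hypothetical), the dataset could compute the secret name once and pass the finished string down:

SecretName: @concat(dataset().Environment, '-USER-', replace(dataset().DBUserName, '\', '-'))

The linked service then uses SecretName verbatim, so no single parameter is consumed by two expressions inside the linked service itself.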

Terraform - Access SSM Parameter Store Value Access

I would like some help / guidance on how to securely read the (decrypted) value of an existing SecureString in SSM Parameter Store, for use in other Terraform resources.
E.g. we have a GitHub access token stored in SSM for CI; I need to pass this value to the GitHub provider to enable webhooks for CodePipeline.
The SSM parameter is not managed from Terraform, but its decrypted value can be used.
Is this insecure, given that the value would end up in the state file? What is the best practice for this type of use case?
Many thanks!
You can use a data source to reference an already existing resource:
data "aws_ssm_parameter" "foo" {
  name = "foo"
}
One of the attributes of the data source is value, which contains the decrypted value of the parameter. You can use this elsewhere in your Terraform code:
data.aws_ssm_parameter.foo.value
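For the GitHub provider case in the question, that could look like (a sketch; provider configuration details vary by provider version):

provider "github" {
  # token is read from the existing SecureString at plan/apply time
  token = data.aws_ssm_parameter.foo.value
}

Note that the decrypted value is still recorded in the Terraform state file, so the state itself needs protecting, e.g. with an encrypted remote backend and restricted access.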
