ADF error when reusing parameter in replace function - azure

I am parameterizing a Linked Service (SQL Server) in ADF, but I have trouble reusing parameters for different service properties as dynamic content.
I have created these parameters for the SQL Server properties:
ServerName
Environment
DatabaseName
DBUserName
A Key Vault is used to store sensitive information for the properties, where the Secret names are created like "POC-USER-MYDOMAIN-MYUSER".
The DBUserName parameter for the Linked Service contains a Windows login like "MyDomain\MyUser". I use the DBUserName parameter both for the "User name" property and for the password stored in Key Vault.
The "User name" property has this dynamic content: "@{linkedService().DBUserName}", and the Key Vault Secret name has this dynamic content: "@{linkedService().Environment}-USER-@{replace(linkedService().DBUserName, '\', '-')}".
[Screenshot: Linked service]
When I execute "Test connection", I use these parameters:
[Screenshot: Parameters]
And "Test connection" returns this error:
[Screenshot: Error]
I can get it working if I create a new parameter named "DBUserNameCopy" and copy the value from "DBUserName", then change the dynamic content of either the "User name" property or the "Key Vault Secret name" property to use the new parameter, and execute "Test connection" with:
[Screenshot: DoubleParameters]
So the dynamic content for the two properties works correctly, but only if they don't share a parameter.
I tried different things to avoid this error, but ended up with this conclusion: you cannot use the same parameter in multiple properties if you use the replace function (I don't know whether this applies to all functions).
Anyone know how to get this to work?

I tried this scenario, and it seems that you cannot use the same linked service parameter in two dynamic expressions. In your case you used DBUserName twice: once in the user name expression and again when constructing the Key Vault secret name. Aside from your workaround of creating a parameter with a different name, I would manipulate the value you pass as the Key Vault secret name outside the linked service. Do this in the dataset that references the linked service: in the dataset definition, include the dynamic expression that prepares the parameter value.
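As a minimal sketch of that idea (the extra KeyVaultSecretName linked service parameter and the dataset parameter names below are assumptions, not part of your current setup): the linked service just consumes a ready-made secret name, and the dataset builds it when it assigns the linked service parameters:

User name (linked service property):             @{linkedService().DBUserName}
Key Vault Secret name (linked service property): @{linkedService().KeyVaultSecretName}

DBUserName (dataset -> linked service parameter):
    @{dataset().DBUserName}
KeyVaultSecretName (dataset -> linked service parameter):
    @{dataset().Environment}-USER-@{replace(dataset().DBUserName, '\', '-')}

This way DBUserName is only referenced once inside the linked service itself, and the replace runs in the dataset instead.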

Related

Azure Bicep (key vault secret passing as a parameter to local variable)

I am new to Azure Bicep. I am trying to use the key vault secret name and value for the virtual machine (Windows) credential, but I am facing a problem with passing the name and value of the key vault secret as a parameter to a local variable. Can anyone guide me on this?
@description('Password for the Virtual Machine.')
@secure()
param adminPassword string = keyVault.getSecret()
You can't use the getSecret() function in main.bicep as a parameter's defaultValue - you can only use it when passing a value to a module's secure parameter within a bicep file. @Deep has a link for that...
If you want to pass the secret to main.bicep you need to use a Key Vault reference in a parameters file, see: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-use-key-vault#edit-the-parameters-file
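As a rough sketch of the module route (the vault name, resource group, secret name, and vm.bicep module below are placeholders I made up, not something from your environment):

// main.bicep - reference the existing Key Vault and pass its secret to a module
resource kv 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
  name: 'my-key-vault'            // placeholder vault name
  scope: resourceGroup('kv-rg')   // placeholder resource group
}

module vm 'vm.bicep' = {
  name: 'vmDeploy'
  params: {
    adminPassword: kv.getSecret('vmAdminPassword')   // placeholder secret name
  }
}

// vm.bicep - the module declares the secure parameter (no default value needed)
@secure()
param adminPassword string

The important part is that getSecret() is only legal when assigning a module's @secure() parameter, not as a default value in main.bicep.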

Azure DevOps terraform and AKV

In our case we are doing the following:
1. Infra Agent
   a. We create a KV
   b. We create a SQL Database in the tf script, including assigning an admin username and password (a randomly generated value)
   c. We store the username and password as secrets in the newly created KV
2. Data Agent
   a. We want to deploy the DDL from the repos onto the SQL Database we created in the Infra Agent. We need to use the SQL database username and password stored in the KV to do so
   b. In order to read the secrets from the KV, our current thinking is to insert the username and password into pipeline parameters in step 1 (i.e. setting them at runtime) so we can reuse the values across other Agents
A couple of questions:
- Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
- What is best practice to access the Database username and password in other Agents, given that:
  - We can’t use variable groups because the KV and values won’t be known until runtime
  - We can’t use the Key Vault Task (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)
b. We create a SQL Database in the tf script, including assigning an admin username and password (randomly generated value).
If you're using Key Vault, then I assume you're talking about Azure SQL Databases. However, at the moment Terraform only supports assigning an administrator username and password for the SQL Server instance, not for individual SQL databases.
In this case, I recommend using random_password resources to assign values to azurerm_key_vault_secret which can then be assigned as the azurerm_sql_server administrator password.
With this setup you know for certain that the password in Key Vault is always in sync, and can be treated as the source of truth for your SQL server passwords (unless someone goes and resets the administrator password manually of course).
Now if you ever want to reset an SQL server password, simply taint the random_password, forcing it to be recreated with a new value, which in turn updates the azurerm_key_vault_secret value and then the azurerm_sql_server password.
Here's some quick HCL as an example
resource "random_password" "password" {
length = 16
special = false
}
resource "azurerm_key_vault_secret" "password_secret" {
depends_on = [<the Key Vault access policy for your infra agent which runs terraform apply>]
...
value = random_password.password.result
...
}
resource "azurerm_sql_server" "sql_server" {
...
administrator_login_password = azurerm_key_vault_secret.password_secret.value
...
}
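To trigger the rotation described above (using the resource address from the example), you can run:

terraform taint random_password.password
terraform apply

On newer Terraform versions taint is deprecated in favour of terraform apply -replace="random_password.password", but either way the password is regenerated and propagated to the Key Vault secret and the SQL server.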
Is that the right approach? Should KV be created in the Infra Agent tf scripts? Should we randomly generate passwords (as secrets)?
This is a sensible approach, but remember that billing is per secret, key or cert and Key Vaults themselves are free. It's recommended to create a Key Vault for each application because access policies can only be applied per Key Vault and not per secret/key/cert.
We can’t use the Key Vault Task (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-key-vault?view=azure-devops) to read from the KV because the KV name is only known at runtime (via the tf vars file)
Why is this only known at runtime? This sounds like a limitation of your own process since Terraform allows you to specify a name for each Key Vault when you create it. Reconsider whether this is really a requirement and why you are doing this. If it definitely is a requirement and your Key Vault names are dynamically generated, then you can use terraform output to get the Key Vault name during the pipeline and set it as a variable during the build.
To fetch the Key Vault name as an output just use the following HCL
output "key_vault_name" {
value = "${azurerm_key_vault.demo_key_vault.name}"
}
and run `terraform output key_vault_name` to write the value to stdout.
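For the "set it as a variable during the build" part, a minimal sketch of a bash step in the pipeline (the variable name keyVaultName is just an example) could look like this:

# Read the Terraform output and expose it to later steps as a pipeline variable
KEY_VAULT_NAME=$(terraform output -raw key_vault_name)
echo "##vso[task.setvariable variable=keyVaultName]$KEY_VAULT_NAME"

Note that terraform output -raw needs a reasonably recent Terraform version; on older versions drop -raw and strip the surrounding quotes yourself.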

Terraform - Access SSM Parameter Store Value

I would like some help / guidance on how to securely access the SSM Parameter Store for the (decrypted) value of an existing SecureString, for use in other Terraform resources.
e.g. we have a GitHub access token stored in SSM for CI - I need to pass this value to the GitHub provider to enable webhooks for CodePipeline.
The SSM Parameter is not something managed from terraform, but its decrypted value can be used.
Is this insecure given the value would end up in the state file? What is the best practice for this type of use case?
Many thanks!
You can use the data source to reference an already existing resource:
data "aws_ssm_parameter" "foo" {
name = "foo"
}
One of the properties of the data source is value, which contains the actual value of the parameter. You can use this elsewhere in your Terraform code:
data.aws_ssm_parameter.foo.value
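For the GitHub token case specifically, a sketch along these lines (the parameter name /ci/github-token and the provider wiring are assumptions about your setup):

data "aws_ssm_parameter" "github_token" {
  # name of the existing SecureString parameter (assumed)
  name = "/ci/github-token"
}

provider "github" {
  token = data.aws_ssm_parameter.github_token.value
}

The data source decrypts SecureString values by default (with_decryption defaults to true). As you suspected, anything read this way does end up in the state file, so the state itself needs to be stored and access-controlled as a secret.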

New-AzureStorageContext: Endpoint vs Environment

For the New-AzureStorageContext cmdlet, assuming I know the value for both, what are the differences between the Endpoint and Environment parameters?
For example, let's say I want a new storage context named foo within the Azure China cloud, so the environment is AzureChinaCloud and the endpoint would be foo.core.chinacloudapi.cn. I can pass either of those as a parameter to the cmdlet, although it appears from the docs that they are mutually exclusive.
What would functionally be different between passing one or the other? If I pass Environment, does the created storage context not have an endpoint? If it does have an endpoint, is it not set to foo.core.chinacloudapi.cn? If I pass the Endpoint, does the context not get set to AzureChinaCloud? Is it even possible not to have one or the other?
Furthermore, the returned AzureStorageContext has a number of properties related to endpoint: BlobEndPoint, EndPointSuffix, FileEndPoint, QueueEndPoint, and TableEndPoint. Which of these properties should be set when passing either Environment or Endpoint?
what are the differences between the Endpoint and Environment parameters?
Endpoint: it contains the storage account name and the Azure environment. For AzureCloud it will be storagename.core.windows.net; for AzureChinaCloud it will be storagename.core.chinacloudapi.cn, like you mentioned. It belongs to the optional parameters.
Environment: it just specifies the environment; it does not contain the storage account name.
If I pass Environment, does the created storage context not have an endpoint? If it does have an endpoint, is it not set to foo.core.chinacloudapi.cn? If I pass the Endpoint, does the context not get set to AzureChinaCloud?
I think you don't need to use both of them. If you pass the account name and environment, the context will have the endpoint; likewise, if you pass the endpoint, it will have the environment.
Is it even possible not to have one or the other?
Of course. You could use other parameters to create the context, e.g. you could just use StorageAccountName and StorageAccountKey; you can find this in the doc you mentioned.
Which of these properties should be set when passing either Environment or Endpoint?
These endpoint properties are all derived from the endpoint, so if you pass the endpoint, I think you don't need to set them. E.g. if the endpoint is storagename.core.windows.net, the BlobEndPoint will be storagename.blob.core.windows.net.
Update:
what are the differences between the Endpoint and Environment parameters?
Different environments determine different endpoints; you can check this with the command Get-AzureRmEnvironment. For AzureCloud it will be core.windows.net, for AzureChinaCloud it will be core.chinacloudapi.cn.
[Screenshot: AzureCloud environment]
[Screenshot: AzureChinaCloud environment]
If I pass Environment, does the created storage context not have an endpoint? If it does have an endpoint, is it not set to foo.core.chinacloudapi.cn? If I pass the Endpoint, does the context not get set to AzureChinaCloud?
You could just use one of them, refer to the screenshot. Note that the Endpoint parameter actually means the EndPointSuffix; you can see this in my test result. So we should pass e.g. -Endpoint "core.windows.net" instead of -Endpoint "storagename.core.windows.net". If we pass -Endpoint "storagename.core.windows.net" it will be incorrect: the EndPointSuffix in the result will be storagename.core.windows.net.
[Screenshot: incorrect result]
Which of these properties should be set when passing either Environment or Endpoint?
In the above screenshots, I don't pass any of these endpoint properties, but you can see the result has all of them. Also, you don't need to pass EndPointSuffix; it equals EndPoint.
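For example, both of the following should land on the China endpoints (the account name foo and $key are placeholders), matching the test results above:

# Using the Environment parameter
$ctx1 = New-AzureStorageContext -StorageAccountName "foo" -StorageAccountKey $key -Environment AzureChinaCloud

# Using the Endpoint parameter, which is effectively the endpoint suffix
$ctx2 = New-AzureStorageContext -StorageAccountName "foo" -StorageAccountKey $key -Endpoint "core.chinacloudapi.cn"

$ctx1.BlobEndPoint   # https://foo.blob.core.chinacloudapi.cn/

Either way, the returned context carries the full set of endpoint properties derived from the account name plus the suffix.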

Azure Function - Change the name of the variable "code"

According to the documentation, the HttpTrigger API key uses the variable name code, like this:
https://<yourapp>.azurewebsites.net/api/<function>?code=<ApiKey>
Can I change this variable name? In my case I want to change it to access_token like this:
https://<yourapp>.azurewebsites.net/api/<function>?access_token=<ApiKey>
I want to do this because I want to use Azure Functions together with a 3rd-party webhook that expects access_token as the variable name.
No, that name cannot be changed - it is part of our API and isn't configurable.
