How to safely store/protect an Azure service principal secret - azure

My deploy task uses a PowerShell script, which uses a service principal to connect to Azure Key Vault and pull a secret. The secret (password) is stored in the PowerShell script's code as plain text. Maybe there is another solution that would minimize exposure of the token.
I also use PowerShell inline mode (not a separate script) with an Azure DevOps secret variable in the deploy task, but this solution is difficult to support (the script has several different operations, so you have to keep many versions of the script).
The script is stored in a Git repository; anyone who has access to it will be able to see the secret and gain access to other keys. Perhaps I don't understand this concept correctly, but if keys cannot be stored in the code, then what should I do?

In DevOps you can use variable groups and define that the variables are pulled directly from a selected Key Vault (if the service principal you have selected has read/list access to the KV) LINK.
This means that you can define all secrets in Key Vault, and they will be pulled before any tasks happen in your YAML. To be able to use them in the script, you can define them as an environment variable or a parameter to your script and just reference $env:variable or $variable, instead of having the secret hardcoded in your script.
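For illustration, a minimal sketch of such a script step (the variable, secret, and file names here are hypothetical; the pipeline-side mapping is shown as a comment):

# deploy.ps1 - consumes a secret that the variable group pulled from Key Vault.
# In the YAML task, map the secret variable explicitly, e.g.:
#   env:
#     SQL_PASSWORD: $(sqlPassword)   # secret variables are not exposed to scripts automatically
param(
    # or pass it as an argument instead: -SqlPassword $(sqlPassword)
    [string]$SqlPassword = $env:SQL_PASSWORD
)
if (-not $SqlPassword) { throw "SQL_PASSWORD was not supplied." }
Write-Host "Credential received; using it without echoing the value."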

Related

Can docker on Azure Linux App Service authenticate with the ACR without us specifying the password in the app settings?

We deploy a Linux App Service to Azure using terraform. The relevant configuration code is:
resource "azurerm_app_service" "webapp" {
app_settings = {
DOCKER_REGISTRY_SERVER_URL = "https://${local.ctx.AcrName}.azurecr.io"
DOCKER_REGISTRY_SERVER_USERNAME = data.azurerm_key_vault_secret.acr_admin_user.value
DOCKER_REGISTRY_SERVER_PASSWORD = data.azurerm_key_vault_secret.acr_admin_password.value
...
}
...
}
The problem is that Terraform does not consider app_settings a secret, so it outputs the DOCKER_REGISTRY_SERVER_PASSWORD value in the clear in the Azure DevOps output (I obfuscated the actual values).
So I am wondering: can Docker running on an Azure Linux App Service host authenticate with the respective ACR without us having to pass the password in a way that makes it so obvious to everyone who can inspect the pipeline output?
The following article seems relevant in general - https://docs.docker.com/engine/reference/commandline/login - but it is unclear how we can apply it in my context, if at all.
Also, according to https://feedback.azure.com/forums/169385-web-apps/suggestions/36145444-web-app-for-containers-acr-access-requires-admin#%7Btoggle_previous_statuses%7D Microsoft has started working on something relevant, but it looks like this is still a work in progress (almost 5 months now).
I'm afraid you must set the DOCKER_REGISTRY_* environment variables to pull the images from the ACR; it's the only way designed by Azure to do that. But for sensitive info such as the password, it also provides a way to hide it. You can use Key Vault to store the password as a secret, and then have App Service get the password from that secret. Take a look at the document Use Key Vault references for App Service. So you can change the app_setting for the password like this:
DOCKER_REGISTRY_SERVER_PASSWORD = "#Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931)"
Or
DOCKER_REGISTRY_SERVER_PASSWORD = "#Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret;SecretVersion=ec96f02080254f109c51a1f14cdb1931)"
Then it just shows the Key Vault reference, not the actual password.
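Note that a Key Vault reference only resolves at runtime if the web app's managed identity is allowed to read the secret. A hedged sketch of that wiring with the az CLI (resource names are placeholders):

# Give the web app a system-assigned identity and capture its principal id
$principalId = az webapp identity assign --resource-group myRg --name myApp --query principalId --output tsv
# Allow that identity to read secrets from the vault (access-policy model)
az keyvault set-policy --name myvault --object-id $principalId --secret-permissions get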
Unfortunately, Azure Web Apps do not support interacting with ACR using a managed identity; you must pass those environment variables to the App Service.
Terraform does not currently support applying a "sensitive" flag to arbitrary values. You can define outputs as sensitive, but that will not help with values you want to hide during the plan phase.
I would suggest checking out https://github.com/cloudposse/tfmask, using the TFMASK_RESOURCES_REGEX configuration to block the output you want to hide during your pipeline. If you're averse to adding dependencies, a similar effect can be achieved by piping terraform apply through grep --invert-match "DOCKER_REGISTRY" instead.
@charles-xu has a good answer as well if you want to set up mappings between Key Vault and your web app and push your tokens into Key Vault secrets.
Now it's possible to use managed identity to pull images from ACR.
You can do the following:
Go to your Container Registry page in the Azure portal.
Open the Access Control (IAM) tab.
Then open the Role assignments tab.
Add the AcrPull role assignment to your App Service or Function App.
In the Deployment Center of your App Service, choose Managed Identity for the Authentication setting.
Or you can use the CLI by following the steps from the official documentation (link below):
https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#use-managed-identity-to-pull-image-from-azure-container-registry
After you have added the role assignment, the DOCKER_REGISTRY_SERVER_URL, DOCKER_REGISTRY_SERVER_USERNAME and DOCKER_REGISTRY_SERVER_PASSWORD settings can be removed from the App Service's App Settings.
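For reference, a hedged CLI sketch of those portal steps, based on my reading of the linked documentation (resource names are placeholders):

# Assign a system-assigned identity to the app and note its principal id
$principalId = az webapp identity assign --resource-group myRg --name myApp --query principalId --output tsv
# Grant that identity pull rights on the registry
$registryId = az acr show --resource-group myRg --name myAcr --query id --output tsv
az role assignment create --assignee $principalId --scope $registryId --role "AcrPull"
# Switch the app to authenticate to ACR with the managed identity
az webapp config set --resource-group myRg --name myApp --generic-configurations '{"acrUseManagedIdentityCreds": true}'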

Where to store the Azure service principal data when using Terraform from CI or Docker

I am reading all the Terraform docs about using a service principal with a client secret when in CI or a Dockerfile or whatever, and I quote:
We recommend using either a Service Principal or Managed Service Identity when running Terraform non-interactively (such as when running Terraform in a CI server) - and authenticating using the Azure CLI when running Terraform locally.
It then goes into great detail about creating a service principal, and then gives an awful example at the end where the client id and client secret end up hardcoded, either in environment variables:
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
or in the terraform provider block:
provider "azurerm" {
# Whilst version is optional, we /strongly recommend/ using it to pin the version of the Provider being used
version = "=1.43.0"
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000"
client_secret = "${var.client_secret}"
tenant_id = "00000000-0000-0000-0000-000000000000"
}
It does put a nice yellow warning box saying not to do this, but there is no suggestion of what to do instead.
I don't think client_secret in an environment variable is a particularly good idea.
Should I be using the client certificate? And if so, the same question arises about where to keep the configuration.
I want to avoid azure-cli if possible.
Azure-cli will not return the client secret anyway.
How do I go about getting these secrets into environment variables? Should I be putting them into a vault or is there another way?
For your requirements, I think you're a little confused about how to choose a suitable option from the four authentication methods.
You can see that Managed Service Identity is only available for services that support the Managed Service Identity feature, so Docker cannot use it. And you also need to assign it appropriate permissions, just like the service principal. You don't want to use the Azure CLI if possible; I don't know why, but let's skip that for now.
The service principal is a good way, I think. It is recommended not to put the secret into a variable inside the Terraform file, so that leaves the environment variable. And if you also do not want to set an environment variable, then I don't think there is a way to use the service principal. Certificate-based service principal authentication only additionally requires setting the certificate path.
And there is a caution for the service principal: you can see its secret only once, when you finish creating it, and then it is not displayed anymore. If you forget it, you can only reset the secret.
So I think the service principal is the most suitable way for you. You can set the environment variables with the --env parameter of the docker run command, or just set them in the Dockerfile with ENV. As for storing the secret in Key Vault, I think you can find the answer in my previous answer.
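As an illustration of the docker run route, a hedged sketch (vault/secret names are placeholders; -AsPlainText requires Az.KeyVault 2.x or later): fetch the secret just-in-time and pass it to the container, rather than baking it into the image with ENV:

# Pull the service principal secret from Key Vault at run time
$clientSecret = Get-AzKeyVaultSecret -VaultName "ciVault" -Name "sp-client-secret" -AsPlainText
# Hand it to the Terraform container as the ARM_* variables the azurerm provider reads
docker run --rm `
    --env ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000" `
    --env ARM_CLIENT_SECRET=$clientSecret `
    --env ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000" `
    --env ARM_TENANT_ID="00000000-0000-0000-0000-000000000000" `
    my-terraform-image terraform plan

Passing the value at run time keeps it out of the image layers, which ENV in a Dockerfile would not.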

Dynamically get KeyVault secret in Azure DevOps Powershell script

We have an Azure Key Vault task in our release pipeline which downloads some secrets for use in the stage.
In an Inline Azure PowerShell script you can just use the following to get the secret value:
$secretValue = $(nameOfTheSecretInKeyVault)
This works fine.
However, we want to move to using scripts in the repo, i.e. pointing the DevOps task to a file path such as /somePath/myScript.ps1
So I would need to parameterise the above line of code, since I cannot just change the name as I do in the inline script, but I can't get it to work.
I have tried:
$compositeName = "${someParameter}-Application"
$secretValue1 = $($compositeName)
$secretValue2 = $("${compositeName}")
$secretValue3 = env:$compositeName
$secretValue4 = $(${compositeName})
The top line is just building up the name of the secret it needs to look for. Unfortunately, none of these work: attempts #1, #2 and #4 come back with the string name only, not the actual secret value, and #3 errors saying it doesn't exist.
Is there a way to achieve this, or do I simply need to parameterise the secret and pass it into the script from the ADO task?
Like you, I couldn't figure out a way to access the variables that the log says are loaded in the Download secrets task of the job. It did work in inline mode, but not with a script file.
So instead I leveraged the existing wiring (a variable group linked to my Key Vault) and just run the command myself at the start of my script:
$mySecretValue = (Get-AzKeyVaultSecret -VaultName "myVault" -Name "mySecret").SecretValueText
From there I could use it as any other variable.
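To answer the original parameterisation question with the same trick, a small sketch (the vault name is a placeholder; note that .SecretValueText was removed in Az.KeyVault 2.x, where -AsPlainText replaces it):

# Build the secret name dynamically, then fetch it straight from Key Vault
$compositeName = "${someParameter}-Application"
$secretValue = (Get-AzKeyVaultSecret -VaultName "myVault" -Name $compositeName).SecretValueText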
Either run your KeyVault tasks first, before your PowerShell script, or do it all in PowerShell.
You will need to create a service connection to your Azure subscription from Azure DevOps. Allow the service connection to access the KeyVault. Access the KeyVault from PowerShell or Azure CLI.
E.g. for PowerShell:
(Get-AzKeyVaultSecret -vaultName "Contosokeyvault" -name "ExamplePassword").SecretValueText
Here is a detailed walkthrough.
There is also native Key Vault integration now, so you can just have your secrets read in as a variable group transparently; no Key Vault-specific PowerShell code required.
https://learn.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&tabs=yaml#link-secrets-from-an-azure-key-vault
One way to tackle this would be to add a parameter to your script and pass the release variable in when you call it, something like -secretValue $(nameOfTheSecretInKeyVault); see the sketch below.
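A minimal sketch of that approach (script and parameter names are illustrative):

# myScript.ps1 - invoked by the task as:
#   /somePath/myScript.ps1 -secretValue $(nameOfTheSecretInKeyVault)
param(
    [Parameter(Mandatory = $true)]
    [string]$secretValue
)
# $secretValue now holds the Key Vault secret downloaded by the pipeline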
You should be able to use $env:nameOfTheSecretInKeyVault, but remember that . becomes _
EDIT: Looking at your question again, if you used env:$nameOfTheSecretInKeyVault you would have had an issue. The correct form is $env:<variable_name>
If anyone comes across this in the future and is looking for a bash alternative, I ended up being able to do this with the following command:
$(az keyvault secret show --name "${secret_name}" --vault-name "${vault_name}" --query "value" | sed "s/\"//g")
This lets you get the value of the vault secret and use it wherever you need. The sed at the end is needed to drop the " characters that come back from the query.

Deploying a VMSS and injecting secrets

I am wondering if there is any straightforward way of injecting files/secrets into the VMs of a scale set, either as you perform the (ARM) deployment or when you change the image.
These would be application-level passwords, certificates, and so on that we would not want stored on the images.
I am using the Linux custom script extension for the entrypoint script, and realize that it's possible to inject some secrets as parameters to that script. I assume this would not work with certificates, however (too big/long), and it would not be very future-proof, as we would need to redeploy the template (and rewrite the entrypoint script) whenever we want to add or remove a secret.
A Windows-based VMSS can get certificates from Key Vault directly during deployment, but Linux ones cannot do that. Also, there is a customData property which allows you to pass in whatever you want (I think it's limited to 64 KB of base64-encoded data), but that is not really flexible either.
One way of solving this is to write an init script that uses Managed Service Identity to get secrets from the Key Vault (see the sketch after this list). This way you get several advantages:
You don't store secrets in the templates/VM configuration.
You can update the secret, and all the VMSS instances will get the new version on the next deployment.
You don't have to edit the init script unless secret names change or new secrets are introduced.
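A hedged sketch of such an init script in PowerShell (the IMDS endpoint and Key Vault REST call are the standard ones; vault and secret names are placeholders, and the VMSS identity needs 'get' permission on the vault's secrets):

# Get an access token for Key Vault from the instance metadata service (IMDS)
$imds = "http://169.254.169.254/metadata/identity/oauth2/token" +
        "?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
$token = (Invoke-RestMethod -Uri $imds -Headers @{ Metadata = "true" }).access_token
# Read the secret over the Key Vault REST API using that token
$secretUri = "https://myvault.vault.azure.net/secrets/mysecret?api-version=7.4"
$secret = (Invoke-RestMethod -Uri $secretUri -Headers @{ Authorization = "Bearer $token" }).value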

Generate Azure Databricks Token using Powershell script

I need to generate an Azure Databricks token using a PowerShell script.
I am done with the creation of Azure Databricks using an ARM template; now I am looking to generate a Databricks token using a PowerShell script.
Kindly let me know how to create a Databricks token using a PowerShell script.
The only way to generate a new token is via the API, which requires you to have a token in the first place.
Or use the web UI manually.
There are no official PowerShell commands for Databricks; there are some unofficial ones, but they still require you to generate a token manually first.
https://github.com/DataThirstLtd/azure.databricks.cicd.tools
Disclaimer: I'm the author of these.
UPDATE: these PowerShell commands can now authenticate using a service principal instead of a bearer token (or can generate a bearer token for you).
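For completeness, once you do have one manually created token, minting further tokens from PowerShell is just a REST call against the public Databricks Token API 2.0; a hedged sketch (the workspace URL and existing token are placeholders):

# Create a new PAT from an existing one via the Token API
$headers = @{ Authorization = "Bearer $existingToken" }
$body = '{ "lifetime_seconds": 86400, "comment": "CI token" }'
$resp = Invoke-RestMethod -Method Post -Uri "https://$workspaceUrl/api/2.0/token/create" `
    -Headers $headers -Body $body -ContentType "application/json"
$newToken = $resp.token_value   # store this securely, e.g. in Key Vault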
So right now there is no way to use the API directly after deploying an Azure Databricks workspace. I assume that you want to use it as part of a CI/CD pipeline - right? The reason is that you first need to manually create an API token, which you can then use for all subsequent API requests.
But I will investigate and keep you updated here!
Another option is to create it via Terraform:
https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/resources/token
Mind you, it creates the token as whomever you ran az login as. So if you az login as yourself (when it spawns a browser asking who to log in as), that's who the token will be created as, assuming that user has permissions in the Databricks workspace and Contributor permissions (or a custom read role; the Reader role doesn't grant the right permissions) on the resource group that houses the workspace.
You can always use az login -u username@email.com -p to log in as someone else, assuming that user doesn't have MFA, and then run terraform init/plan/apply. Mind you, if you have backend storage, that user also has to have permissions on that backend storage so it can create/update any tfstate files stored there.
