Jenkins Config-As-Code: Azure KeyVault not injecting secret values

I'm deploying Jenkins via Helm, with JCasC building up our installation. I'm trying to use an Azure KeyVault to inject secret values, such as credentials, into my Config As Code. I'm configuring the Azure KeyVault via Config As Code as well, and am injecting the client secret using Kubernetes secrets.
However, the problem I am running into is that the Config As Code plugin isn't using the Azure KeyVault as a secret source when it injects secret values. It goes through my JCasC yaml and reports that there are no values in my self-defined secrets location (I mounted the Azure client secret as a volume in my deployment). Only when it reaches the config for the Azure KeyVault does it inject the client secret from the mounted secrets, and then it finishes. All my other credentials are null, but the Azure KeyVault is connected.
My question is: do I need to define the Azure KeyVault connection very early on in my casc yaml? Is there something I can set to notify the casc plugin that I want to use an Azure KeyVault as my secret source, given that the vault's credentials are located in the casc file itself?
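For context, JCasC secret sources generally have to be usable before the casc yaml is parsed, so their settings tend to live in the environment rather than in the yaml itself. Below is a rough sketch of how the pieces described above fit together, with two explicit assumptions: controller.containerEnv is the field used by recent versions of the official Jenkins Helm chart, and the AZURE_KEYVAULT_* variable names are the ones the azure-keyvault plugin's README describes for its secret source; verify both against the docs for your versions.

controller:
  containerEnv:
    # Assumed variable names for the plugin's JCasC secret source
    - name: AZURE_KEYVAULT_URL
      value: "https://myvault.vault.azure.net"
    - name: AZURE_KEYVAULT_SP_CLIENT_SECRET
      valueFrom:
        secretKeyRef:
          name: azure-client-secret   # the mounted Kubernetes secret
          key: clientSecret
  JCasC:
    configScripts:
      credentials: |
        credentials:
          system:
            domainCredentials:
              - credentials:
                  - usernamePassword:
                      id: "my-credential"
                      username: "svc-user"
                      password: "${my-keyvault-secret}"  # resolved by the secret source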

Related

What is the point of Azure App Config KeyVault references?

In Azure you can set up an App Config and a KeyVault. The point of the KeyVault is to store more sensitive data than your App Config and to let you regulate access to the config and the vault separately.
So what is the benefit of using a keyvault reference in the app config?
You are basically allowing anyone with access to the app config to access certain values in your keyvault, bypassing the additional layer of security the vault normally provides.
That additional layer being the authentication against the vault that would otherwise be required to access those same values if they weren't referenced in the config.
I really don't understand what benefit keyvault references give you.
This blog article by Jan de Vries explains them in more detail: https://jan-v.nl/post/2021/using-key-vault-with-azure-app-configuration/.
The relevant part for your question:
As it happens, the code for accessing App Configuration doesn’t give your application permission to retrieve secrets from Key Vault.
The application retrieves them from Key Vault, not from App Configuration.
App Config only holds the reference, not the actual value.
Official docs also mention this:
Your application uses the App Configuration client provider to retrieve Key Vault references, just as it does for any other keys stored in App Configuration. In this case, the values stored in App Configuration are URIs that reference the values in the Key Vault. They are not Key Vault values or credentials. Because the client provider recognizes the keys as Key Vault references, it uses Key Vault to retrieve their values.
Your application is responsible for authenticating properly to both App Configuration and Key Vault. The two services don't communicate directly.
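Concretely, the value App Configuration stores for a Key Vault reference is just a small JSON document pointing at the secret (vault and secret names below are placeholders), marked with the dedicated content type application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8:

{
  "uri": "https://myvault.vault.azure.net/secrets/mysecret"
}

Anyone who can read the App Config entry learns only this URI; reading the actual value still requires their identity to be granted access on the vault.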
I suppose there are different approaches to using the KeyVault, but the way I tend to use it is as follows.
My application will have a set of secrets, which I store locally using the Secrets Manager; you would add the secret for your application like this:
dotnet user-secrets set "Movies:ServiceApiKey" "12345"
Your application can then read this setting using _moviesApiKey = Configuration["Movies:ServiceApiKey"]; as you'll see in the link above. The value itself never appears in the code, but your application can read it from the Secrets Manager.
If you do forget the values, you can use the following command to retrieve them:
dotnet user-secrets list
KeyVault will work as your Secrets Manager within Azure. So, your application will need permission to access the KeyVault; in my case I store the vault name in appsettings.json, and during bootstrapping I include the KeyVault configuration only when running in Production mode, i.e. on the Azure server and not locally.
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.AddConsole();
            logging.AddAzureWebAppDiagnostics();
        })
        .ConfigureAppConfiguration((context, config) =>
        {
            // Only wire up Key Vault when running in Production, i.e. on Azure.
            if (context.HostingEnvironment.IsProduction())
            {
                // Build the configuration collected so far to read the vault name.
                IConfigurationRoot builtConfig = config.Build();

                // Build a separate configuration source backed by the Key Vault.
                ConfigurationBuilder keyVaultConfigBuilder = new ConfigurationBuilder();
                keyVaultConfigBuilder.AddAzureKeyVault(builtConfig["VaultName"]);
                IConfigurationRoot keyVaultConfig = keyVaultConfigBuilder.Build();

                // Layer the Key Vault values on top of the existing configuration.
                config.AddConfiguration(keyVaultConfig);
            }
        })
        .UseStartup<Startup>();
Note the check for context.HostingEnvironment.IsProduction(). Within the appsettings, I have:
"VaultName": "https://yourkvname.vault.azure.net/"
So, the only reference I have to the KeyVault from the application is the name, and that should be secure as only the application will have access to the keys/secrets.
One thing to note: you need to make sure that the names match between your local secrets and the ones in the KeyVault. Key Vault secret names don't allow colons, so I needed to make a small change to the names, using double dashes (--) in place of the colon (:), so...
Movies:ServiceApiKey
Becomes
Movies--ServiceApiKey
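For example, the user-secret created earlier would be mirrored into the vault like this (an Azure CLI sketch; the vault name is a placeholder):

az keyvault secret set --vault-name yourkvname --name "Movies--ServiceApiKey" --value "12345"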
When working in Azure, storing secrets in Key Vault is a good idea, and the Key Vault reference notation makes it better. This feature makes sure no one can read the secrets unless they have been granted permission.
Speaking of secrets, they should never be stored directly in the application settings of a Function App (the same goes for App Services, by the way). Why not? Because the secrets would be available to anyone who has access to the Function App in the Azure Portal.

The right way is to use an Azure Key Vault, which is the Azure component for securely storing and accessing secrets 🔒. Once your secrets are in the key vault, you grant the Key Vault access to the identity of your Function App, and you can then reference the secrets you need directly in your application settings. These are called Key Vault references because an application setting does not directly contain the value of a secret, but a reference to the secret stored in Key Vault. When running, your function will automatically have access to the secret and its value as an environment variable, as if it were a normal application setting.
Key Vault references work for both App Services and Function Apps and are particularly useful for existing applications that have their secrets stored in settings because securing the secrets with Azure Key Vault references does not require any code change.
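For illustration, a Key Vault reference is an ordinary application setting whose value uses the @Microsoft.KeyVault(...) syntax; either of these forms works (vault and secret names are placeholders):

@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
@Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret)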
Reference: https://www.techwatching.dev/posts/azure-functions-custom-configuration
https://www.sharepointeurope.com/using-key-vault-references-with-azure-app-configuration/

Azure Kubernetes - Java Spring app & managed identity to access key vault?

I am trying to Dockerize a Java Spring application and deploy it in Azure Kubernetes. The application connects to a database, and it currently reads the connection string from a configuration file.
As this application will be Dockerized and deployed on AKS, I want to read the connection string from Azure Key Vault using a managed identity.
Are there any samples available which demonstrate the above scenario?
You could store the connection string as a keyvault secret, then use the Java SDK to get it.
Make sure you have added your MSI (managed identity) to the keyvault access policy, then use the code below.
1. Create a secret client
It uses DefaultAzureCredential to authenticate. If you don't set the service-principal environment vars, it will fall back to your MSI automatically. You can also use ManagedIdentityCredentialBuilder instead of DefaultAzureCredentialBuilder and specify the clientId of your MSI.
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;
import com.azure.security.keyvault.secrets.models.KeyVaultSecret;

// Build the client once; DefaultAzureCredential falls back to the managed
// identity when no service-principal environment variables are set.
SecretClient secretClient = new SecretClientBuilder()
    .vaultUrl("<your-key-vault-url>")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();

2. Retrieve a secret

// Fetch the secret by name and read its value.
KeyVaultSecret secret = secretClient.getSecret("<secret-name>");
System.out.printf("Retrieved secret with name \"%s\" and value \"%s\"%n",
    secret.getName(), secret.getValue());
For more details, see Azure Key Vault Secret client library for Java - Version 4.2.0.
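In case it saves someone a lookup, the snippet above needs two dependencies, roughly as follows for Maven (versions shown match the 4.2.0 docs linked above; pin to whatever is current):

<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-security-keyvault-secrets</artifactId>
  <version>4.2.0</version>
</dependency>
<!-- provides DefaultAzureCredentialBuilder / ManagedIdentityCredentialBuilder -->
<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-identity</artifactId>
  <version>1.1.0</version>
</dependency>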

How to keep an Azure service principal secret safe

My deploy task uses a PowerShell script, which uses a Service Principal to connect to Azure KeyVault and pull a secret. The secret (password) is stored in the PowerShell script's code as plain text. Maybe there is another solution that would minimize exposure of the token.
I also use PowerShell inline mode (not a separate script) with an Azure DevOps secret variable in the deploy task, but this solution is difficult to maintain (the script has several different operations, so you have to keep many versions of the script).
The script is stored in a Git repository; anyone who has access to it will be able to see the secret and gain access to other keys. Perhaps I don't understand this concept correctly, but if keys cannot be stored in the code, then what should I do?
In DevOps you can use variable groups and configure them so that the variables are pulled directly from a selected keyvault (if the service principal you have selected has read/list access to the KV).
This means that you can define all secrets in keyvault, and they will be pulled before any tasks happen in your yaml. To be able to use them in a script, you can map them as an env variable or pass them as a parameter to your script, and then reference $env:variable or just $variable instead of having the secret hardcoded in your script; see the sketch below.
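A minimal pipeline sketch (group, variable, and script names are placeholders; note that secret variables are not exposed to scripts automatically, they must be mapped into the environment explicitly):

variables:
- group: my-keyvault-group          # variable group linked to the Key Vault

steps:
- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      # The secret arrives via the env mapping below, never hardcoded here
      ./deploy.ps1 -ApiKey $env:DEPLOY_SECRET
  env:
    DEPLOY_SECRET: $(MySecretName)  # explicit mapping required for secret variables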

Where to store the Azure service principal data when using Terraform from CI or Docker

I am reading all the Terraform docs about using a service principal with a client secret when in CI or a Dockerfile or whatever, and I quote:
We recommend using either a Service Principal or Managed Service Identity when running Terraform non-interactively (such as when running Terraform in a CI server) - and authenticating using the Azure CLI when running Terraform locally.
It then goes into great detail about creating a service principal and then gives an awful example at the end where the client id and client secret are exposed, either by storing them in environment variables:
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
or in the terraform provider block:
provider "azurerm" {
# Whilst version is optional, we /strongly recommend/ using it to pin the version of the Provider being used
version = "=1.43.0"
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000"
client_secret = "${var.client_secret}"
tenant_id = "00000000-0000-0000-0000-000000000000"
}
It does put a nice yellow box next to it saying not to do this, but there is no suggestion of what to do instead.
I don't think client_secret in an environment variable is a particularly good idea.
Should I be using the client certificate and if so, the same question arises about where to keep the configuration.
I want to avoid azure-cli if possible.
Azure-cli will not return the client secret anyway.
How do I go about getting these secrets into environment variables? Should I be putting them into a vault or is there another way?
For your requirements, I think you're a little confused about how to choose a suitable option from the four ways.
Managed Service Identity is only available for services that support the Managed Service Identity feature, so docker cannot use it; you would also need to assign it appropriate permissions, just as with a service principal. You don't want to use the Azure CLI if possible (I don't know why, but let's skip that).
The service principal is a good way, I think. The docs recommend not putting the secret into a variable inside the Terraform file, so that leaves the environment variables. If you also do not want to set environment variables, then I don't think there is a way to use the service principal at all. The certificate variant of the service principal only requires setting a certificate path in addition.
And there is a caveat with the service principal: you can see its secret only once, right after you create it, and it is never displayed again. If you forget it, you can only reset the secret.
So I think the service principal is the most suitable way for you. You can set the environment variables with the --env parameter of docker run, or set them in the Dockerfile with ENV (see the sketch below). As for storing the secret in a key vault, I think you can find the answer in my previous answer.
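A minimal sketch of the docker run variant (the image name is a placeholder; passing -e VAR with no value forwards the variable from the host environment, which keeps the secret out of the image layers, unlike ENV in the Dockerfile):

# Forward the service principal credentials from the host environment
# into the container at runtime (values never appear in the image).
docker run \
  -e ARM_CLIENT_ID \
  -e ARM_CLIENT_SECRET \
  -e ARM_SUBSCRIPTION_ID \
  -e ARM_TENANT_ID \
  my-terraform-image terraform apply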

Read secrets inside of Redhat OpenShift online?

I have gotten a Red Hat OpenShift Online starter VPS for hosting my discord bot. I've uploaded it to GitHub, minus my discord token and other API keys, of course :^)
How would I get OpenShift to store and read client secrets?
I'm using the nodejs8 framework if that helps.
Secrets have no place in a source version control hosting service like GitHub.
Regarding OpenShift, it includes Secrets: ConfigMap-like objects whose values are base64-encoded, into which you can inject confidential information.
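A minimal sketch of that mechanism (names are placeholders; the second command assumes the bot runs from a DeploymentConfig named discord-bot, and the Node.js code would read the value via process.env.DISCORD_TOKEN):

# Store the token as a secret instead of committing it to GitHub
oc create secret generic discord-bot-secrets --from-literal=DISCORD_TOKEN=<your-token>

# Expose every key of the secret to the bot as environment variables
oc set env dc/discord-bot --from=secret/discord-bot-secrets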
But the long-term confidential information itself (to be injected into OpenShift secrets) ought to be stored in a proper vault.
Like, for instance, HashiCorp Vault, as described in the article "Managing Secrets on OpenShift – Vault Integration".
The rest of this answer describes that solution, but even if you don't use that particular product, the general idea (an external vault-type storage) remains:
An Init Container (run before the main container of a pod is started) requests a wrapped token from the Vault Controller over an encrypted connection.
Wrapped credentials allow you to pass credentials around without any of the intermediaries having to actually see the credentials.
The Vault Controller retrieves the pod details from the Kubernetes API server.
If the pod exists and contains the vaultproject.io/policies annotation, the Vault Controller calls Vault and generates a unique wrapped token with access to the Vault policies mentioned in the annotation. This step requires trusting the pod author to have used the right policies. The generated token has a configurable TTL.
The Vault Controller "calls back" the Init Container using the pod IP obtained from the Kubernetes API over an encrypted connection, and delivers the newly created wrapped token to it. Notice that the Vault Controller does not trust the pod; it only trusts the master API.
The Init Container unwraps the token to obtain the Vault token that will allow access to the credentials.
The Vault token is written to a well-known location in a volume shared between the two containers (emptyDir), and the Init Container exits (see the pod sketch below).
The main container reads the token from the token file. Now the main container can use the token to retrieve all the secrets allowed by the policies considered when the token was created.
If needed, the main container renews the token to keep it from expiring.
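A generic sketch of the emptyDir handoff from the last few steps (image names and paths are hypothetical; the real init-container image and its configuration come from the Vault Controller setup in the article):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-vault
  annotations:
    vaultproject.io/policies: "my-policy"
spec:
  initContainers:
  - name: vault-init
    image: example/vault-init          # hypothetical: requests and unwraps the token
    volumeMounts:
    - name: vault-token
      mountPath: /var/run/secrets/vaultproject.io
  containers:
  - name: app
    image: example/app                 # reads the token file, then talks to Vault
    volumeMounts:
    - name: vault-token
      mountPath: /var/run/secrets/vaultproject.io
      readOnly: true
  volumes:
  - name: vault-token
    emptyDir:
      medium: Memory                   # keep the token in RAM, not on disk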
