I have got a Red Hat OpenShift Online Starter instance for hosting my Discord bot. I've uploaded the bot to GitHub, minus my Discord token and other API keys, of course :^)
How would I get OpenShift to store and read client secrets?
I'm using the Node.js 8 runtime, if that helps.
Secrets have no place in a source version control hosting service like GitHub.
Regarding OpenShift, it includes Secrets, a ConfigMap-like object whose values are base64-encoded, into which you can inject confidential information.
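For the simple case, a minimal sketch of that mechanism (discord-bot, discord-bot-secrets and DISCORD_TOKEN are placeholder names for your own):

oc create secret generic discord-bot-secrets --from-literal=DISCORD_TOKEN=<your-token>
oc set env dc/discord-bot --from=secret/discord-bot-secrets

The second command exposes every key in the secret as an environment variable on your deployment, so your Node.js code reads process.env.DISCORD_TOKEN instead of a hard-coded string.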
But the long-term confidential information itself (what gets injected into OpenShift Secrets) ought to be stored in a proper vault.
For instance, HashiCorp Vault, as described in the article "Managing Secrets on OpenShift – Vault Integration".
The rest of this answer describes that solution, but even if you don't use that particular product, the general idea (an external vault-type storage) remains:
An Init Container (run before the main container of a pod is started) requests a wrapped token from the Vault Controller over an encrypted connection.
Wrapped credentials allow you to pass credentials around without any of the intermediaries having to actually see the credentials.
The Vault Controller retrieves the pod details from the Kubernetes API server.
If the pod exists and contains the vaultproject.io/policies annotation, the Vault Controller calls Vault and generates a unique wrapped token with access to the Vault policies mentioned in the annotation (see the pod sketch after these steps). This step requires trusting the pod author to have used the right policies. The generated token has a configurable TTL.
The Vault Controller "calls back" the Init Container using the pod IP obtained from the Kubernetes API over an encrypted connection and delivers the newly created wrapped token to it. Notice that the Vault Controller does not trust the pod; it only trusts the master API.
The Init Container unwraps the token to obtain the Vault token that will allow access to the credentials.
The Vault token is written to a well-known location in a volume shared between the two containers (emptyDir) and the Init Container exits.
The main container reads the token from the token file. Now the main container can use the token to retrieve all the secrets allowed by the policies considered when the token was created.
If needed, the main container renews the token to keep it from expiring.
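To make the moving parts concrete, here is a minimal pod sketch of that scheme (the image names, policy name, and paths are placeholders; the exact annotation and controller wiring depend on the vault-controller deployment the article uses):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    vaultproject.io/policies: "my-app-policy"     # policies the wrapped token will be scoped to
spec:
  initContainers:
    - name: vault-init                            # requests and unwraps the token, writes it, exits
      image: example/vault-init                   # placeholder image
      volumeMounts:
        - name: vault-token
          mountPath: /var/run/secrets/vaultproject.io
  containers:
    - name: app
      image: example/my-app                       # placeholder image
      volumeMounts:
        - name: vault-token                       # main container reads the token from here
          mountPath: /var/run/secrets/vaultproject.io
  volumes:
    - name: vault-token
      emptyDir: {}                                # scratch volume shared by the two containers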
Related
I'm deploying Jenkins via Helm with JCasC building up our application. I'm trying to use an AzureKeyVault to set secret values into my Config As Code, such as credentials. I'm configuring the Azure KeyVault via Config as Code as well, and am injecting the Client Secret using kubernetes secrets.
However, the problem I am running into is that the Config As Code plugin isn't using the Azure Key Vault as a secret source when injecting the secret values. It goes through my JCasC yaml and states that there are no values in my self-defined secrets location (I mounted the Azure client secret as a volume in my deployment). Only when it gets to the config for the Azure Key Vault does it inject the client secret from the mounted secrets, and then it finishes. All my other credentials are null, but the AzureKeyVault is connected.
My question is: do I need to define the AzureKeyVault connection very early on in my casc yaml? Is there something I can set to notify the casc plugin that I want to use an AzureKeyVault as my secret source, and that the credentials are located in the casc file itself?
I have integrated an external Vault into the Kubernetes cluster. Vault is injecting the secrets into the shared volume "/vault/secrets" inside the pod, where they can be consumed by the application container. So far everything looks good.
But I can see a security risk in putting the secrets into a shared volume in plain text, as anyone who has access to the Kubernetes cluster can read the application secrets.
Example: secrets are injected into the shared volume at /vault/secrets/config
Now, if a Kubernetes cluster admin logs in, they can access the pod along with the credentials available in the shared volume in plain-text format.
The kubectl exec -it <pod> command can be used to get a shell inside the pod.
In this case, my concern is that a cluster admin can access the application secrets (e.g. database passwords), which is a security risk. In my scenario the Vault admin and the Kubernetes cluster admin are different people.
Having a shared volume where all the secrets are stored in plain text, readable by anyone who can exec into the pod, doesn't sound too secure, to be honest. You could improve the security a little bit (only a little bit) by setting the use limit (the num_uses token attribute) to 1 (one) and alerting whenever the legitimate application (that is, the one the secret was intended for) gets a "token invalid" error message.
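For illustration, a sketch of that limit when the token is issued through Vault's Kubernetes auth method (role, policy, and namespace names are hypothetical, and token_num_uses assumes a reasonably recent Vault version):

# Each token issued for this role can be used exactly once
vault write auth/kubernetes/role/my-app \
    bound_service_account_names=my-app \
    bound_service_account_namespaces=my-namespace \
    token_policies=my-app-policy \
    token_ttl=15m \
    token_num_uses=1

If someone else spends the token first, the legitimate application's read fails with an invalid-token error, which is exactly the event you would alert on.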
I'm a K8s noob but how about this guide:
https://cloud.redhat.com/blog/integrating-hashicorp-vault-in-openshift-4
I know it's for RH OSE but maybe the concept sparks an idea.
In Azure you can set up an App Configuration and a Key Vault. The point of the Key Vault is to store more sensitive data than your App Configuration and to be able to regulate access to the config and the vault separately.
So what is the benefit of using a keyvault reference in the app config?
You are basically allowing anyone with access to the app config to access certain values in your keyvault and are bypassing the additional layer of security the vault normally provides.
The additional layer being that, if those values weren't referenced in the config, accessing them would require authenticating to the vault.
I really don't understand what benefit keyvault references give you.
This blog article by Jan de Vries explains them in more detail: https://jan-v.nl/post/2021/using-key-vault-with-azure-app-configuration/.
The relevant part for your question:
As it happens, the code for accessing App Configuration doesn’t give your application permission to retrieve secrets from Key Vault.
The application retrieves them from Key Vault, not from App Configuration.
App Config only holds the reference, not the actual value.
Official docs also mention this:
Your application uses the App Configuration client provider to retrieve Key Vault references, just as it does for any other keys stored in App Configuration. In this case, the values stored in App Configuration are URIs that reference the values in the Key Vault. They are not Key Vault values or credentials. Because the client provider recognizes the keys as Key Vault references, it uses Key Vault to retrieve their values.
Your application is responsible for authenticating properly to both App Configuration and Key Vault. The two services don't communicate directly.
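In code, that double authentication looks roughly like this; a sketch assuming the Microsoft.Extensions.Configuration.AzureAppConfiguration and Azure.Identity packages (the endpoint and key names are placeholders):

using System;
using Azure.Identity;
using Microsoft.Extensions.Configuration;

var builder = new ConfigurationBuilder();
builder.AddAzureAppConfiguration(options =>
{
    // The application authenticates to App Configuration itself...
    options.Connect(new Uri("https://my-app-config.azconfig.io"), new DefaultAzureCredential())
           // ...and separately to Key Vault, so the provider can resolve Key Vault references.
           .ConfigureKeyVault(kv => kv.SetCredential(new DefaultAzureCredential()));
});
IConfiguration config = builder.Build();

// A plain key and a Key Vault reference are read the same way;
// for the reference, the value comes from Key Vault, not App Configuration.
string value = config["SomeSecret"];

So a caller that only has access to App Configuration, but no Key Vault permission, still cannot resolve the referenced values.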
I suppose there are different approaches to using the KeyVault, but the way I tend to use it is as follows.
My application will have a set of secrets, which I store locally using the Secrets Manager; you would add the secret for your application like so:
dotnet user-secrets set "Movies:ServiceApiKey" "12345"
Your application can then read this setting using _moviesApiKey = Configuration["Movies:ServiceApiKey"]; as you'll see in the link above. Obviously, there's no way you can see this value in the code, but your application can read it from the Secrets Manager.
If you do forget the values, you can use the following command to retrieve them:
dotnet user-secrets list
KeyVault will work as your Secrets Manager within Azure. So, your application will need to have permission to access the KeyVault; in my case I store the vault name in appsettings.json, and during bootstrapping I include the KeyVault configuration only when running in Production mode, i.e. on the Azure server and not locally.
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.AddConsole();
            logging.AddAzureWebAppDiagnostics();
        })
        .ConfigureAppConfiguration((context, config) =>
        {
            // Only pull configuration from Key Vault when running in Production (i.e. in Azure)
            if (context.HostingEnvironment.IsProduction())
            {
                IConfigurationRoot builtConfig = config.Build();
                ConfigurationBuilder keyVaultConfigBuilder = new ConfigurationBuilder();
                keyVaultConfigBuilder.AddAzureKeyVault(builtConfig["VaultName"]);
                IConfigurationRoot keyVaultConfig = keyVaultConfigBuilder.Build();
                config.AddConfiguration(keyVaultConfig);
            }
        })
        .UseStartup<Startup>();
Note, the check for context.HostingEnvironment.IsProduction(). Within the appsettings, I have:
"VaultName": "https://yourkvname.vault.azure.net/"
So, the only reference I have to the KeyVault from the application is the name, and that should be secure as only the application will have access to the keys/secrets.
One thing to note: you need to make sure that the names match both for your local secrets and the ones in the KeyVault. Key Vault secret names can't contain a colon (:), so I needed to make a small change to the names, using double dashes (--) in place of the colon, so...
Movies:ServiceApiKey
Becomes
Movies--ServiceApiKey
When working in Azure, storing secrets in Key Vault is a good idea. And to make it better, there's the Key Vault reference notation. With this feature, nobody can read the secrets unless they have been granted permission on the vault.
Speaking of secrets, they should never be stored directly in the application settings of a Function App (same goes for App Services, by the way). Why not? Because the secrets would be available to anyone who has access to the Function App in the Azure Portal. The right way is to use an Azure Key Vault, which is the Azure component for securely storing and accessing secrets 🔒. Once your secrets are in the key vault, you grant your Function App's identity access to the Key Vault, and you can then reference the secrets you need directly in your application settings. These are called Key Vault references because an application setting does not directly contain the value of a secret, but a reference to the secret stored in Key Vault. When running, your function will automatically have access to the secret and its value as an environment variable, as if it were a normal application setting.
Key Vault references work for both App Services and Function Apps and are particularly useful for existing applications that have their secrets stored in settings because securing the secrets with Azure Key Vault references does not require any code change.
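For reference, an application setting that is a Key Vault reference looks roughly like this (vault and secret names are placeholders):

MySecret = @Microsoft.KeyVault(SecretUri=https://yourkvname.vault.azure.net/secrets/MySecret/)

or, equivalently:

MySecret = @Microsoft.KeyVault(VaultName=yourkvname;SecretName=MySecret)

At runtime the platform resolves the reference using the identity you granted access to the vault, and the function reads MySecret as an ordinary application setting.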
Reference: https://www.techwatching.dev/posts/azure-functions-custom-configuration
https://www.sharepointeurope.com/using-key-vault-references-with-azure-app-configuration/
Please let me know how to back up the Azure WAF resource configuration/settings. I am not able to find anything related to WAF backup settings/configuration.
Go to Application Gateway -> Export Template
The template contains all configuration/settings for your WAF
I appreciate this question is a little old now - the answer already given and accepted is technically valid, but note that any export you do of an Application Gateway that uses SSL certificates will not include the certificates in the export.
So if you were to try and re-create the Application Gateway from the exported JSON, it will probably not work and may need additional configuration to re-import the SSL certificates referenced by listeners.
For information, you can also use the Export-AzResourceGroup cmdlet to export your App Gateway but this too does not include the SSL certificates.
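For example (resource group, gateway, and file names are placeholders), something along these lines exports the gateway's template:

# Narrow the export down to the Application Gateway resource
$gw = Get-AzApplicationGateway -Name "my-appgw" -ResourceGroupName "my-rg"
Export-AzResourceGroup -ResourceGroupName "my-rg" -Resource $gw.Id -Path ./appgw-backup.json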
A fairly recent (end of 2020) addition which alleviates this to some extent is the ability to pass a reference to a Key Vault certificate, but this requires a user-assigned managed identity to be associated with the Application Gateway, with permissions to access the certificates and secrets (if the certificates had passwords when added to the Key Vault).
Create a user-assigned managed identity - look for Managed Identities in the Azure Portal.
Configure your key vault - this basically means adding permissions to the Key Vault so the managed identity can manage certificates and read secrets.
Configure the application gateway. Use the Set-AzApplicationGatewayIdentity cmdlet to assign the user-assigned managed identity's ID to the Application Gateway, as sketched below.
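A sketch of that step in PowerShell (resource names are placeholders; it assumes the Az modules and that the identity already has the Key Vault permissions from the previous step):

$identity = Get-AzUserAssignedIdentity -ResourceGroupName "my-rg" -Name "appgw-identity"
$appgw = Get-AzApplicationGateway -Name "my-appgw" -ResourceGroupName "my-rg"
# Attach the user-assigned identity, then persist the change on the gateway
Set-AzApplicationGatewayIdentity -ApplicationGateway $appgw -UserAssignedIdentityId $identity.Id
Set-AzApplicationGateway -ApplicationGateway $appgw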
The instructions above are abbreviated from this article (which I'll get grief for adding, but I'm not regurgitating documentation here for the sake of it): https://learn.microsoft.com/en-us/azure/application-gateway/key-vault-certs
Remember that I said the key vault integration alleviates the issue to some extent. Using references still doesn't resolve the issue completely as I believe there are still some pieces of secret information omitted from exports which would be needed to fully re-create the Application Gateway in a working state.
Issue:
Mounted ADLS gen2 container using service principal secret as secret from Azure Key Vault-backed secret scope. All good, can access the data.
Deleted secret from service principal in AAD, added new, updated Azure Key Vault secret (added the new version, disabled the old secret). All was still good, could access the data.
Restarted cluster. Unable to access mount point, error: “AADToken: HTTP connection failed for getting token from AzureAD. Http response: 401 Unauthorized”
Unmount/mount using the same config helped.
Is there a way to refresh the secret used for mount point that I could add to init scripts to avoid this issue? I would rather avoid unmounting/mounting all mount points in init scripts and was hoping that there is something like dbutils.fs.refreshMounts() that would help (refreshMounts didn't help with this particular issue).
I mounted ADLS Gen2 using service principal, oauth2.0, and azure key vault-backed secret scope, following this documentation: https://learn.microsoft.com/en-us/azure/databricks/data/data-sources/azure/azure-datalake-gen2#mount-azure-data-lake-gen2
Also - out of curiosity: does anybody know how long a token to mount to ADLS Gen2 lives? As long as the cluster did not restart, I was able to access my mnt even though the secret was deleted and updated (i.e., secret was updated in AAD and Key Vault; no failures until restarting the cluster - which was more than 12 hours after the update).
This is a known limitation. Whenever you create a mount point using credentials coming from an Azure Key Vault backed secret scope, the credentials will be stored in the mount point and will never be refreshed again.
This is a one-time read that happens at mount point creation time, so each time you rotate credentials in Azure Key Vault you need to re-create the mount points to refresh the credentials there.
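For example, here is a re-mount sketch you could run in a notebook (or scheduled job) after rotating the secret; the scope, key, container, storage account, and tenant values are placeholders, and dbutils is the built-in Databricks utility object:

# Re-read the rotated service principal secret from the Key Vault-backed scope
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get(scope="kv-scope", key="sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

# Drop the stale mount and re-create it so the new credential is stored with it
dbutils.fs.unmount("/mnt/mydata")
dbutils.fs.mount(
    source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/mydata",
    extra_configs=configs,
)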
I would suggest you provide feedback on the same:
Azure Databricks - Feedback
All of the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Azure.