Ways to manage secrets in HashiCorp Vault - security

Hi guys!
I've just started using Vault for our applications, and I've noticed a few areas for improvement, so I want to gather more information on how this could be done.
I used the KV secrets engine and created the following path structure:
/{env}/services/{service}
env could be sand/stage/prod1/prod2...
services could be app1/app2/app3...
The first problem is secret duplication.
For example:
app1 and app2 need to communicate with each other, and they have keys/secrets to do that, so I need to add the same keys/secrets in at least two places: /{env}/services/app1 and /{env}/services/app2
I came up with a solution based on master/replica secrets. For example:
Create a /{env}/common secret and gather all the duplicated values in this one place.
Create a /{env}/nested secret whose entries map a common secret key to the service paths it should be copied to: APP1_USERNAME=/{env}/services/app1,/{env}/services/app2
Create a lambda function to fetch the /{env}/common secrets by the keys listed in /{env}/nested and write them to the paths defined in the /{env}/nested secret values (see the sketch below)
Trigger the function once every hour/two/three...
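A minimal sketch of what that sync function could look like, assuming the KV v2 engine is mounted at the default secret/ path and using the hvac Python client; the path layout follows the structure above, everything else is illustrative:
import os
import hvac

client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

def sync_common_secrets(env):
    # Read the shared values and the key -> target-paths mapping.
    common = client.secrets.kv.v2.read_secret_version(path=f"{env}/common")["data"]["data"]
    nested = client.secrets.kv.v2.read_secret_version(path=f"{env}/nested")["data"]["data"]
    for key, targets in nested.items():
        for target in targets.split(","):
            # patch() updates only this key and leaves the rest of the
            # target secret intact (KV v2 only).
            client.secrets.kv.v2.patch(path=target.lstrip("/"), secret={key: common[key]})

sync_common_secrets("stage")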
The second problem is how to automate adding secrets, or at least how to notify developers.
For example: a developer finishes a new feature that uses a new configuration parameter and forgets to push it into Vault.
The pull request is successfully reviewed and merged, the new code is deployed, and it fails to start due to the missing parameter.
I have some thoughts about using GitHub Actions to scan the code and check that the required secrets exist in Vault, along the lines of the sketch below.
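A rough sketch of such a CI check, again with hvac; the required-secrets.txt manifest and the paths are assumptions, not an established convention:
import os
import sys
import hvac

client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

def check_required_secrets(env, service, manifest="required-secrets.txt"):
    # Keys the service declares it needs, one per line, kept in the repo.
    with open(manifest) as f:
        required = {line.strip() for line in f if line.strip()}
    stored = client.secrets.kv.v2.read_secret_version(path=f"{env}/services/{service}")["data"]["data"]
    missing = required - stored.keys()
    if missing:
        sys.exit(f"Missing secrets in Vault for {service}: {sorted(missing)}")

check_required_secrets("stage", "app1")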
Are there any practices for handling such situations?

Related

Is it safe to put in secrets inside Google App Script code?

I'm creating a Google Workspace Add-On and need to make some requests using OAuth. They provide a guide here explaining how to do so. In the sample code, it's suggested that the OAuth client secret be inline:
function getOAuthService() {
  return OAuth2.createService('SERVICE_NAME')
      .setAuthorizationBaseUrl('SERVICE_AUTH_URL')
      .setTokenUrl('SERVICE_AUTH_TOKEN_URL')
      .setClientId('CLIENT_ID')
      .setClientSecret('CLIENT_SECRET')
      .setScope('SERVICE_SCOPE_REQUESTS')
      .setCallbackFunction('authCallback')
      .setCache(CacheService.getUserCache())
      .setPropertyStore(PropertiesService.getUserProperties());
}
Is this safe for me to do?
I don't know how Google Apps Script is architected, so I don't have details on where and how the code is run.
Most likely it is safe, since the script is only accessible to the script owner, and to Workspace admins if it is for Google Workspace (which may or may not be an issue).
Well, you can add some security/safety by making use of a container: use a container-bound script, which is bound to a Google Spreadsheet, Google Doc, or any other file that allows user interaction, or a standalone script that connects to a UI for interaction in some other way. Refer to this link for a more detailed explanation: What is the appropriate way to manage API secrets within a Google Apps script?
Otherwise, the only other option I see is to store the keys and secrets in User Properties. Here's how you can do it: Storing API Keys and secrets in Google AppScript user property
You can also refer to the link below for more general information on how to manage secrets and add some security: https://softwareengineering.stackexchange.com/questions/205606/strategy-for-keeping-secret-info-such-as-api-keys-out-of-source-control

GitHub Actions + Azure OIDC with "subject" value for any branch

I'm using GitHub Actions to build some Docker images that I want to push to Azure Container Registry. I am attempting to use OIDC as an auth mechanism, based on this GH Action. I know the action supports other auth strategies, which I have discarded for my use case for reasons.
According to the GH docs, the "subject" field needs to be populated based on the GH account, repo name, and branch name. However, I want to build Docker images for multiple branches, which seems to require one federation config per branch - not practical, IMO.
So my question is: does anyone know if it's possible (and how) to set up a single federation config with a "subject" value that would work as a wildcard of sorts, covering all branches from a given repo?
thanks!
On AWS it is possible to use wildcards, like:
"repo:MY_ORG/MY_REPO:*"
but that doesn't seem to work on Azure: you can enter a wildcard in Azure Federated Credentials, but the GitHub workflow fails. To actually need a branch is crazy, as we'd have to set up a new credential config for each new git branch.
I worked around the issue by using GitHub environments. I set up an environment (called main, but it can be called anything) and then set up my workflow like this:
jobs:
  test:
    runs-on: ubuntu-latest
    environment: main
and then in Azure set the federated credentials to use:
an Entity of Environment rather than an Entity of Branch.
This will then work for any branch - but clearly, if you use GitHub environments for other reasons, this may not be viable.
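For reference, with an Entity of Environment the subject claim GitHub puts into the OIDC token no longer contains a branch; assuming the environment is called main, it looks like:
repo:MY_ORG/MY_REPO:environment:main
which is why a single federated credential then covers every branch.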
Note that, since Oct. 2022:
GitHub Actions: OpenID Connect support enhanced to enable secure cloud deployments at scale (Oct. 2022)
OpenID Connect (OIDC) support in GitHub Actions enables secure cloud deployments using short-lived tokens that are automatically rotated for each deployment.
You can now use the enhanced OIDC support to configure the subject claim format within the OIDC tokens, by defining a customization template at either org or repo levels.
Once the configuration is completed, the new OIDC tokens generated during each deployment will follow the custom format.
This enables organization & repository admins to standardize OIDC configuration across their cloud deployment workflows in a way that suits their compliance & security needs.
Learn more about Security hardening your GitHub Workflows using OpenID Connect.
That means, from the documentation:
Customizing the subject claims for an organization or repository
To help improve security, compliance, and standardization, you can customize the standard claims to suit your required access conditions.
If your cloud provider supports conditions on subject claims, you can create a condition that checks whether the sub value matches the path of the reusable workflow, such as job_workflow_ref: "octo-org/octo-automation/.github/workflows/oidc.yml#refs/heads/main".
The exact format will vary depending on your cloud provider's OIDC configuration. To configure the matching condition on GitHub, you can use the OIDC REST API to apply a customization template for the OIDC subject claim; for example, you can require that the sub claim within the OIDC token must always include a specific custom claim, such as job_workflow_ref.
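For illustration, applying such a customization template to a repository could look like the following sketch against the documented REST endpoint; the org/repo names and token variable are placeholders:
import os
import requests

# PUT /repos/{owner}/{repo}/actions/oidc/customization/sub
resp = requests.put(
    "https://api.github.com/repos/octo-org/octo-automation/actions/oidc/customization/sub",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    # Include job_workflow_ref in the sub claim rather than the default format.
    json={"use_default": False, "include_claim_keys": ["repo", "job_workflow_ref"]},
)
resp.raise_for_status()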

Security on Azure Cosmos DB

I want to use Cosmos DB from C# code. A really important point is that the data should stay encrypted at all times. So, as I understood it, once the data is on the server, it's automatically encrypted by Azure through encryption-at-rest. But during transport, do I have to use a certificate, or is it encrypted automatically? I used this link to manage the database: https://learn.microsoft.com/fr-fr/azure/cosmos-db/create-sql-api-dotnet. So my question is: is there any safety risk if I just follow this tutorial?
Thanks.
I think that's a great starting point.
Just one note: your data is only as secure as the access keys to the account, so, on top of encryption at rest and in transit, the access key is probably the most sensitive piece of information you need to protect.
My advice is to use a Key Vault to store the database access key rather than defining it as an environment variable. Combined with Managed Identity, your key will never leave the confines of Azure, which makes it the most secure option. I'm not sure how you plan on deploying your code, but more often than not I've seen those keys embedded in source code or in some configuration file that ends up exposed.
A while ago I wrote a step-by-step tutorial describing how to implement this. You can find my article here.
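A minimal sketch of that Key Vault + Managed Identity approach (shown in Python for brevity, though the asker is on C#; the vault URL and secret name are placeholders):
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves to the Managed Identity when running in Azure.
credential = DefaultAzureCredential()
secrets = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)
cosmos_key = secrets.get_secret("cosmos-access-key").value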
I would suggest following the instructions mentioned here and not using access keys at all, because if they are accidentally exposed, it doesn't matter whether you stored them in a Key Vault or not: your database is out there. Besides, if you do want to use access keys, it is recommended to change them periodically, which you then need to automate and make known to your Key Vault; here is a description of how you could automate that.
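Going keyless, as this answer suggests, means authenticating to Cosmos DB with Azure AD role-based access control instead of an access key; a sketch with a placeholder account URL:
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

# No access key anywhere: the identity running this code needs a
# Cosmos DB data-plane RBAC role assignment instead.
client = CosmosClient("https://my-account.documents.azure.com",
                      credential=DefaultAzureCredential())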

Decrypt Azure Function App Operation Secret

I'm looking to get at an Azure Function App's list of operational endpoints for each function, in particular the secret code that needs to be passed in to invoke the function.
I've tried lots of current answers on SO, but all of them only seem to work with Function Apps that use Files as the secret storage type.
We have a requirement to use Blob storage, which is also the default in v2 Function Apps.
What I'm really after is the code piece that comes after the function name when it's retrieved from the Azure portal; I can manufacture all the other pieces before that myself.
For example https://mytestfunapp-onazure-apidev03.azurewebsites.net/api/AcceptQuote?code=XYZABCYkVeEj8zkabgSUTRsCm7za4jj2OLIQWnbvFRZ6ZIiiB3RNFg==
I can see where the secrets are stored in Azure Blob Storage, as we need to configure that anyway when we create all the resources in our scripts.
What I'm really looking for is how to decrypt the secret stored in the file. I don't care what programming language or script the solution may be written in; I'll work with it, or convert it to another language that we can use.
Here's a snippet of what the stored secret looks like in Blob storage; it's just a JSON file.
I'm wondering if anyone out there has some experience with this issue and may be able to help me out.
For now it's not supported to get the true key value programmatically; you can only view your keys or create new keys in the portal. You can find the description here: Obtaining keys.
If your function is a WebHook, when using a key other than the default you must also specify the clientid as a query param (the client ID is the name of your new key):
https://<yourapp>.azurewebsites.net/api/<funcname>?clientid=<your key name>
For more information, refer to this wiki doc: WebHooks.

Has creating a linked service in Data Factory changed? There are no longer two options, connection string and Key Vault

I created a linked service in Data Factory using the Key Vault option some months ago. I wanted to create a new linked service a few days ago, and I realized the UI for linked service creation has been changed!
Previously, based on this article, https://learn.microsoft.com/en-us/azure/data-factory/store-credentials-in-key-vault#azure-key-vault-linked-service, there were two options: 1. connection string (needed the DB name, server name, and a username and password for the DB); 2. Key Vault (just needed the secret name and a Key Vault connection).
Now those two options have been changed to 1. password; 2. Key Vault. And the weird part is that in both options the DB name, username, and password are mandatory! That is not acceptable, because the point of using Key Vault is not sharing DB properties with developers, and sharing just the secret name instead.
Does anyone have any opinion about it?
You can edit the JSON code of the linked service to make it reference a connection string stored in Key Vault.
Here's the format; after editing, click the Finish button and it will be published.
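Based on the Microsoft article linked in the question, a linked service JSON that pulls the connection string from Key Vault looks roughly like this (all names are placeholders):
{
    "name": "AzureSqlLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "MyKeyVaultLinkedService",
                    "type": "LinkedServiceReference"
                },
                "secretName": "MyConnectionStringSecret"
            }
        }
    }
}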
Yes, it has been changed. Now you only need to put your password into Azure Key Vault.
Your old linked services will still work, but the new UI only supports the password-only Key Vault option.
