Take the following case as an example.
You have a RESTful API layer secured with OAuth2. To let users authenticate against your APIs, you need to request an access token (i.e. grant_type=password).
In order to request a password access token, the client app requires an OAuth client (a key+secret pair).
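For reference, such a token request might look roughly like the sketch below (the endpoint URL, client key and secret are placeholders, not values from the actual project):

```typescript
// requestToken.ts — sketch of an OAuth2 password-grant token request.
// The endpoint and client credentials below are hypothetical placeholders.
async function requestAccessToken(username: string, password: string): Promise<string> {
  const body = new URLSearchParams({
    grant_type: 'password',
    username,
    password,
    client_id: 'my-oauth-client-key',        // OAuth client key created by the backend build
    client_secret: 'my-oauth-client-secret', // matching secret
  });

  const res = await fetch('https://api.example.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body,
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const json = await res.json();
  return json.access_token;
}
```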
Now you've configured everything to use continuous integration and continuous deployment.
During a development build, the build script creates test data, including OAuth clients. Obviously, if a build creates test data, it first drops all data created during previous automated test runs.
So you want your client app to use one of those OAuth clients, but you want to avoid hardcoding it, because the clients are created by the API infrastructure and are re-created from scratch on each build.
Keep in mind that the front end and back end are built by different build scripts.
Conclusion & question
What would be a good approach to share secrets between the server and client infrastructure, so both get up and running synchronized with the same security secrets?
Some ideas
1. Operating system environment variables. I could store those secrets in environment variables on the build machine, so the client infrastructure would always be built and deployed with the most up-to-date secrets.
2. Same as #1, but storing those secrets in a shared directory on the build machine.
For the TFS/VSTS build (TFS 2015 or later) and release (TFS 2017 or VSTS) systems, you just need to check the Allow Scripts to Access OAuth Token option in the Options/General tab of the build definition or release environment; then you can fetch the OAuth access token by using $(System.AccessToken) in each task.
For other systems, a better approach is to store the access token in system environment variables and remove it at the end, which is similar to sharing a variable value with other build/release tasks by writing "##vso[task.setvariable variable=testvar;]testvalue" (e.g. from PowerShell) in TFS or VSTS.
You can also store an encrypted access token in the system environment for security, then decrypt it before use.
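As a rough illustration, a task script can read the injected token and share a value with later tasks via a logging command. This is only a sketch, assuming the task runs a Node script; the variable name and secret value are hypothetical:

```typescript
// shareVariable.ts — sketch; assumes "Allow Scripts to Access OAuth Token" is enabled,
// which exposes $(System.AccessToken) to scripts as the SYSTEM_ACCESSTOKEN env var.
const accessToken = process.env.SYSTEM_ACCESSTOKEN;
if (!accessToken) {
  throw new Error('SYSTEM_ACCESSTOKEN not set; check "Allow Scripts to Access OAuth Token".');
}

// Writing a ##vso logging command to stdout makes the value available to
// subsequent tasks in the same job as $(oauthClientSecret).
const secretFromBuild = 'example-secret'; // hypothetical value produced earlier in the build
console.log(`##vso[task.setvariable variable=oauthClientSecret;issecret=true]${secretFromBuild}`);
```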
In the end, I went with the approach of storing a JSON file with the latest credentials in a common build directory that both builds can access. Each backend build run persists a JSON file containing the full credentials, and the frontend build reads that file.
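A minimal sketch of that hand-off, assuming a Node-based build step (the directory, file name and credential shape are assumptions, not the actual project layout):

```typescript
// sharedCredentials.ts — sketch of the shared-JSON-file hand-off between builds.
import { readFileSync, writeFileSync } from 'fs';
import { join } from 'path';

// Hypothetical shared directory reachable by both build definitions on the agent.
const sharedDir = process.env.SHARED_BUILD_DIR ?? './shared-build';
const credentialsFile = join(sharedDir, 'oauth-credentials.json');

interface OAuthClientCredentials {
  clientId: string;
  clientSecret: string;
}

// Backend build: overwrite the file on every run so it always holds the
// most recently created OAuth client.
export function persistCredentials(creds: OAuthClientCredentials): void {
  writeFileSync(credentialsFile, JSON.stringify(creds, null, 2));
}

// Frontend build: read whatever the latest backend build persisted.
export function readCredentials(): OAuthClientCredentials {
  return JSON.parse(readFileSync(credentialsFile, 'utf8'));
}
```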
I did try the environment variables approach, but since both builds run on the same TFS build agent, the client build couldn't see changes to the environment variables unless the whole agent service was restarted.
Related
I am trying to set up the development environment for a project that uses an API hosted in GCP. We are using the Google Auth Library: Node.js Client, which tries to pull an ID token automatically and fails. This is the error:
Error: Cannot fetch ID token in this environment, use GCE or set the GOOGLE_APPLICATION_CREDENTIALS environment variable to a service account credentials JSON file.
So, I've solved this by manually downloading a service account key and pointing the GOOGLE_APPLICATION_CREDENTIALS environment variable to it. However, when more developers start to work on this project, it would be great to have a somewhat more automatic or streamlined solution.
I've been reading around, and was hoping that setting the GOOGLE_APPLICATION_CREDENTIALS to the key file generated by gcloud auth application-default login would do the trick. But, it seems like the library doesn't work with user credentials? At least it doesn't work when I try it.
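For context, the failing call is roughly like the sketch below (the target API URL is a placeholder; the original setup may differ):

```typescript
// idTokenClient.ts — rough sketch of the google-auth-library call that triggers the error.
import { GoogleAuth } from 'google-auth-library';

async function callApi(): Promise<void> {
  const auth = new GoogleAuth();

  // getIdTokenClient needs credentials that can mint an ID token (e.g. a service
  // account key pointed to by GOOGLE_APPLICATION_CREDENTIALS). With the user
  // credentials produced by `gcloud auth application-default login` it fails with
  // "Cannot fetch ID token in this environment ...".
  const client = await auth.getIdTokenClient('https://my-api.example.com');
  const res = await client.request({ url: 'https://my-api.example.com/hello' });
  console.log(res.data);
}

callApi().catch(console.error);
```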
It would be great to have a way for a developer setting up the project locally to either simply authenticate with Google in the terminal, or point GOOGLE_APPLICATION_CREDENTIALS to a file generated by a gcloud command, instead of having to go into GCP and download a service account key.
Is this possible somehow? It's been a little tricky to find out. Thanks!
Some other questions I've seen:
Local development without using google service account key
Could not load the default credentials? (Node.js Google Compute Engine tutorial)
In my GitLab repo, I have to run a scheduled job which triggers a pipeline, and this pipeline deletes old job logs using the GitLab API.
But this API call needs a GitLab access token to perform the operation. Initially I thought of using the CI_JOB_TOKEN variable, which is an auto-generated token, but it has no access to these GitLab APIs.
Alternatively, I can store a project access token as a variable in my scheduled job, but it will also be visible to other people in the project with the Maintainer or Owner roles.
Is there any other way where I can either store my tokens without revealing them to others, or some mechanism that lets this run without passing my project access tokens at all?
Your best bet would be to store the secret in a vault/cloud service, such as HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, etc. GitLab has the CI_JOB_JWT_V2 token, which can be used to authenticate to cloud services. With this method, you do not need to store any secrets in GitLab at all.
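For example, a job script could exchange CI_JOB_JWT_V2 for a short-lived Vault token via Vault's JWT auth method. The sketch below assumes a Node script, a VAULT_ADDR variable, and a hypothetical "gitlab-ci" role configured on the Vault side:

```typescript
// vaultJwtLogin.ts — sketch of trading the GitLab CI JWT for a Vault token.
const vaultAddr = process.env.VAULT_ADDR ?? 'https://vault.example.com';
const ciJwt = process.env.CI_JOB_JWT_V2;

async function getVaultToken(): Promise<string> {
  if (!ciJwt) throw new Error('CI_JOB_JWT_V2 is not available in this job.');

  // /v1/auth/jwt/login is Vault's standard JWT login endpoint; "gitlab-ci"
  // is a hypothetical role that must be configured in Vault beforehand.
  const res = await fetch(`${vaultAddr}/v1/auth/jwt/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ role: 'gitlab-ci', jwt: ciJwt }),
  });
  if (!res.ok) throw new Error(`Vault login failed: ${res.status}`);
  const body = await res.json();
  return body.auth.client_token; // short-lived token used to read secrets from Vault
}
```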
You can also see the Vault integration as another option.
The only other option might be to use a runner that has the secret on the system and lock that runner to your project.
I'm using a hosting website to host my Discord bot, and my .env file stores the token. How does it still work when the file is .gitignored? I don't want people stealing my token and using it for other purposes.
Your initial deployment process on your hosting needs to be more complex than "Pull the application from my Git repository".
For simple applications that generally just means you create the .env file on the hosting manually.
For complex systems (e.g. when you have multiple instances of the application on different servers) you'll generate it from a secure data store as part of a process that involves a deployment tool like Terraform.
You use .gitignore and add the .env file to it to make sure it does not get pushed to the remote repository on GitHub, so that no one can access those variables. In order to make the .env variables available on a hosting website, you need to add the environment variables externally on that hosting site; the method depends entirely on the service provider.
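To illustrate, a Node bot typically reads the token like the sketch below, whether the value comes from a local .gitignored .env file or from environment variables configured in the host's dashboard (the variable name DISCORD_TOKEN is an assumption):

```typescript
// bot.ts — sketch of reading the token via dotenv; locally it comes from .env,
// on the host it comes from provider-configured environment variables.
import 'dotenv/config';

const token = process.env.DISCORD_TOKEN; // variable name is a placeholder
if (!token) {
  throw new Error('DISCORD_TOKEN is not set (missing .env locally, or missing host config).');
}
// client.login(token); // hand the token to your Discord client here
```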
I am trying to integrate KeyVault into my Azure App Service. I have a KeyVault client library embedded in my application. In order for this client library to connect to KeyVault and access stored secrets, some configuration must be available to the client. There are 4 types of credential objects that the client attempts to use, in a specific order, during initialization for authentication/authorization. The first credential object it tries is an environment-based one, which gathers 4 environment variables from the hosting system to initialize the KeyVault client. One of these variables must contain the ClientSecret of the application trying to connect to KeyVault via the client library.

The problem I am running into is this: in my Azure release pipeline I am trying to set the environment variables of the deployed host appropriately for the application to use. However, it appears that the release tasks all run on the same host, until you get to the actual deployment task of the App Service. Apparently this task runs on a different host? When running the hostname command, the previous tasks all returned one hostname, while the hostname command added to the deployment task returned another.

I am a little stuck and having trouble finding clarity in the documentation about setting environment variables for an App Service. Does anyone have any ideas? Am I going about integrating KeyVault correctly, or is there something I am missing? Please let me know if clarification or more information is required to assist me. Thank you very much.
If you are using Azure App Service, this is way easier: you can directly link application settings to KeyVault using Managed Identities.
A sample config value looks like this:
@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931)
This way you:
- don't have to change anything in your application code; the app reads secrets from KeyVault just like any other configuration setting
- don't need to manage any client-side credentials to access KeyVault
You need to create the variables in your pipeline and retrieve them (from Key Vault) during the release process.
PS: Your app will receive/read them as Environment Variables.
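In other words, once the KeyVault reference (or pipeline variable) is configured, the app just reads a normal setting. A minimal sketch, where the setting name "MySecret" is an assumption:

```typescript
// readSetting.ts — sketch: the resolved secret appears as an ordinary app
// setting / environment variable at runtime; no KeyVault SDK call is needed here.
const mySecret = process.env.MySecret; // "MySecret" is a hypothetical setting name
if (!mySecret) {
  throw new Error('MySecret is not configured (or the KeyVault reference failed to resolve).');
}
```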
What I want to accomplish:
I want to deploy an Azure Cloud Service via Release Management. I managed to get this working by following the steps outlined in this post. In the post the Azure publishsettings file is added to the project and used in Release Management to deploy the Azure package to a Cloud Service. So far so good.
What is the issue:
The Azure publishsettings file will also contain information about the production environment. I don't want that information to be available to all the developers, and therefore I would like a more secure alternative.
What did I try:
I created a custom action which takes 3 arguments: subscription id, subscription name and certificate key. This way the Azure information stays in Release Management and can be passed to a script. This didn't work because the action is not shown in the Release Template Toolbox.
What is my question:
What is the best way to pass Azure credentials to a deployment script via Release Management on a secure manner?
We have a solution for Build today that will work for RM in the future.
The publish settings file is a sensitive one: anybody who holds it can get access to certain activities. And no matter how you pass the publish settings file around, it can be misused.
So along with the publish settings file, you need to add a bit of process to the deployment, such as:
Deactivate or remove the management certificate, which in turn invalidates the given publish settings, so that anyone has to request a new publish settings file before they actually start any release procedure.
Even though it adds a rough edge to your otherwise smooth deployment flow, since this is a live/production system it is always better to tighten the process and make it idiot-proof.