Marking variables in variable groups as secrets makes them invisible - terraform

I have a case where marking variables as secrets causes their values to be lost in a Release task; please allow me to elaborate further.
Please find below a screenshot of the Terraform service principal variables.
This setup works: the variables are available in the pipeline.
Now take the scenario where they are marked as secret and locked.
Running the pipeline reports that a required variable is not set.
I have added a step to echo these variables, just to see if I can see them; here is the Release task:
I assume *** means the value was actually echoed, so they do work in the echo statement.
I am not able to understand why the behavior is different:
When in plain text, they are available in the pipeline.
When marked as secret, they are not available.
How do I make them available in the pipeline?
Updates
Doing something like this:
terraform plan -out main.plan -var "ARM_SUBSCRIPTION_ID=$(TF_VAR_ARM_SUBSCRIPTION_ID)" "ARM_CLIENT_ID=$(TF_VAR_ARM_CLIENT_ID)" "ARM_CLIENT_SECRET=$(TF_VAR_ARM_CLIENT_SECRET)" "ARM_TENANT_ID=$(TF_VAR_ARM_TENANT_ID)"
it reports:
2019-03-07T00:21:19.7692360Z ##[command]"terraform" plan -out main.plan -var "ARM_SUBSCRIPTION_ID=***" "ARM_CLIENT_ID=***" "ARM_CLIENT_SECRET=***" "ARM_TENANT_ID=***" -input=false -no-color
and then I get this error:
2019-03-07T00:21:19.8504985Z Too many command line arguments. Configuration path expected.

So, to follow up on this: if you make a variable a secret, you cannot access it from any script directly. What I do, for any task that needs the decrypted value, is go to the Environment Variables section of that task and enter the following.
What this does is decrypt the variable and set an environment variable of the same name, so that tools like Terraform can access it.
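For reference, the YAML equivalent of that Environment Variables section looks roughly like the sketch below, using the TF_VAR_* names from the question; the task shape and inline terraform call are illustrative, and the key part is the env mapping:
# Sketch only: explicitly map the secret variable-group values into the step's
# environment. Terraform then reads them through its TF_VAR_ mechanism.
- bash: |
    terraform plan -out main.plan -input=false -no-color
  displayName: terraform plan (illustrative step)
  env:
    TF_VAR_ARM_SUBSCRIPTION_ID: $(TF_VAR_ARM_SUBSCRIPTION_ID)
    TF_VAR_ARM_CLIENT_ID: $(TF_VAR_ARM_CLIENT_ID)
    TF_VAR_ARM_CLIENT_SECRET: $(TF_VAR_ARM_CLIENT_SECRET)
    TF_VAR_ARM_TENANT_ID: $(TF_VAR_ARM_TENANT_ID)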

Related

Can I pass a variable from .env file into .gitlab-ci.yml

I'm quite new to CI/CD. Basically, I'm trying to add a job to GitLab CI/CD that will run through the repo looking for secret leaks. It requires an API key to be passed to it. I was able to insert this key directly into .gitlab-ci.yml and it worked as it was supposed to - failing the job and showing that this happened because the key was in that file.
But I would like to have this API key stored in a .env file that won't be pushed to the remote repo, and to pull it from there into the .gitlab-ci.yml file somehow.
Here's my .gitlab-ci.yml:
stages:
  - scanning

gitguardian scan:
  variables:
    GITGUARDIAN_API_KEY: ${process.env.GITGUARDIAN_API_KEY}
  image: gitguardian/ggshield:latest
  stage: scanning
  script: ggshield scan ci
The pipeline fails with the message Error: Invalid API key, so I assume that the way I'm passing it into variables is wrong.
CI variables should be available in the gitlab-runner (machine or container) as environment variables. They are either predefined and populated by GitLab (see the list of predefined variables here), or added by you in the settings of the repository or the GitLab group under Settings > CI/CD > Variables > Add Variable.
After adding the variable you can use the following syntax; you can test whether the variable has the correct value by echoing it.
variables:
  GITGUARDIAN_API_KEY: "$GITGUARDIAN_API_KEY"
script:
  - echo "$GITGUARDIAN_API_KEY"
  - ggshield scan ci

Secret Variables in Azure Pipelines

I have an Azure pipeline that needs to access a secret token to contact another service. I've been following the documentation, but it does not seem to work as I'd expect. As a minimal example, I'm trying to write the variable cachix_token to a file.
- bash: |
    set -ex
    mkdir -p packages
    echo $CACHIX_AUTH_TOKEN > packages/token
  env:
    CACHIX_AUTH_TOKEN: $(cachix_token)
However, when I download the resulting token file, the contents are a literal
$(cachix_token)
How do I get the YAML to substitute in the secret variable?
Update
Below is a screenshot of where I've defined the secret variable for the pipeline.
As I eventually found, Azure Pipelines doesn't allow forks to access secret variables. So, even though the secret variable is defined for the pipeline, if you're performing a pull request off of a fork, instead of the main repo, then the secret variable is not defined and you'll see the behavior that I explained above.
As explained in the documentation, this can be bypassed through the Make secrets available to builds of forks checkbox in the gui for the pipeline. However, this does open a massive security hole where anyone can craft a malicious PR that gives them a verbatim copy of all your secrets.
If you have a literal $(cachix_token) in the file, it means that Azure Pipelines was not able to replace that variable, which in turn means you don't have it defined anywhere. You may also confirm this using this extension - Print all variables.
Here is the documentation on how to set a secret variable. However, you can also use Azure Key Vault to store variables and then fetch their values from it. Using the built-in extension, they are also loaded as secrets.
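If you go the Key Vault route, a minimal sketch of the built-in task looks roughly like this (the service connection, vault and secret names below are placeholders):
# Sketch: fetch secrets from Key Vault and expose them as secret pipeline variables.
# 'my-service-connection', 'my-key-vault' and 'cachix-token' are placeholder names.
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-service-connection'
    KeyVaultName: 'my-key-vault'
    SecretsFilter: 'cachix-token'   # or '*' to fetch every secret in the vault

# The fetched values are secret variables, so they still have to be mapped
# explicitly into the environment of any script that needs them.
- bash: |
    mkdir -p packages
    echo "$CACHIX_AUTH_TOKEN" > packages/token
  env:
    CACHIX_AUTH_TOKEN: $(cachix-token)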
I repeated your steps:
steps:
- bash: |
    set -ex
    mkdir -p packages
    echo $CACHIX_AUTH_TOKEN > packages/token
    cat packages/token
  env:
    CACHIX_AUTH_TOKEN: $(cachix_token)
and got this:
+ mkdir -p packages
+ echo ***
+ cat packages/token
***
Which means that the variable was correctly replaced.

terraform interpolation with variables returning error [duplicate]

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For the Terraform remote backend, is there a way to use a variable to specify the organization / workspace name instead of the hardcoded values there?
The Terraform documentation didn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable then you can use partial configuration for the static parts (eg the type of backend such as S3) and then provide config at run time interactively, via environment variables or via command line flags.
I personally wrap Terraform actions in a small shell script that runs terraform init with command line flags so that it uses an appropriate S3 bucket (e.g. a different one for each project and AWS account) and makes sure the state file location matches the path of the directory I am working in.
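A minimal sketch of that pattern as a pipeline step, assuming an S3 backend and hypothetical bucket/key/region values (the backend "s3" {} block in the .tf files is left empty):
# Sketch only: supply the backend settings at init time instead of hardcoding
# them in the terraform block. Bucket, key and region values are placeholders,
# and $(ENVIRONMENT) is an assumed pipeline variable.
- bash: |
    terraform init \
      -backend-config="bucket=my-terraform-state-$(ENVIRONMENT)" \
      -backend-config="key=my-app/terraform.tfstate" \
      -backend-config="region=eu-west-1"
    terraform plan -out main.plan
  displayName: terraform init with runtime backend config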
I had the same problems and was very disappointed by the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt, because it closes the gap around Terraform's lack of variable support in places such as the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry

Azure Pipeline Unit Tests & Environment Variables

I'm struggling to find another question with an answer for this. I have the following code in a unit test (variable names changed). This information is used in my integration tests:
var configuration = new ConfigurationBuilder()
    .SetBasePath(Environment.CurrentDirectory)
    .AddEnvironmentVariables()
    .AddUserSecrets<MyTestTests>()
    .Build();

var option = new Option();
option.x1 = configuration.GetValue<string>("Option:x1");
option.x2 = configuration.GetValue<string>("Option:x2");
option.x3 = configuration.GetValue<string>("Option:x3");
option.x4 = configuration.GetValue<string>("Option:x4");
return option;
This works fine when my unit tests run locally. However, when my integration tests run in an Azure Pipeline, the environment variables are not picked up.
I have created them in the format option__x1, where the separator is a double underscore.
If the environment variables are plain (non-secret) it works; however, if they are set as secret it does not.
Does anyone have any idea?
This behavior is by design, to protect secret variables from being exposed in the task.
This documentation states that secret variables are:
Not decrypted into environment variables. So scripts and programs run by your build steps are not given access by default.
Decrypted for access by your build steps. So you can use them in password arguments and also pass them explicitly into a script or a program from your build step (for example as $(password)).
That is the reason why you cannot use the secret variables in your task.
To resolve this issue, we need to explicitly map secret variables:
variables:
  GLOBAL_MYSECRET: $(mySecret)
  GLOBAL_MY_MAPPED_ENV_VAR: foo

steps:
- powershell: |
    Write-Host "Mapped secret is available as $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret)  # the right way to map a secret to an env variable
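Applied to the scenario in the question, a sketch of the test step might look like this (the task and variable names are illustrative; the point is that each secret has to be mapped into env using the double-underscore key format):
# Sketch: map secret pipeline variables into the test run's environment.
# 'option__x1' etc. mirror the configuration keys used by the test project;
# the $(option__x1) pipeline variables are assumed to be defined as secrets.
- task: DotNetCoreCLI@2
  displayName: Run integration tests
  inputs:
    command: test
    projects: '**/*Tests.csproj'   # illustrative project pattern
  env:
    option__x1: $(option__x1)
    option__x2: $(option__x2)
    option__x3: $(option__x3)
    option__x4: $(option__x4)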
You could check this thread and the documentation for some more details.
Hope this helps.

Substitute Service Fabric application parameters during deployment

I'm setting up my production environment and would like to secure my environment-related variables.
For the moment, every environment has its own application parameters file, which works well, but I don't want every dev in my team knowing the production connection strings and other sensitive stuff that could appear in there.
So I'm looking for every possibility available.
I've seen that in Azure DevOps, which I'm using at the moment for my CI/CD, there is support for variable substitution (XML transformation). Is it usable in a Service Fabric project?
I've seen in another project something similar through Octopus.
Are there any other tools that would help me manage my variables by environment safely (and easily)?
Can I do that with my KeyVault eventually?
Any recommendations?
Thanks
EDIT: here is an example of how I'd like to manage those values; this is a screenshot from Octopus:
So something similar to this, which separates and injects the values per environment, is what I'm looking for.
You can apply an XML transformation to the ApplicationParameters file to update the values in there before you deploy it.
The other option is to use PowerShell to update the application and pass the parameters as arguments to the script.
The Start-ServiceFabricApplicationUpgrade command accepts a hashtable with the parameters; technically, the built-in task in VSTS\DevOps transforms the application parameters into a hashtable. The script would be something like this:
# Get the existing parameters
$app = Get-ServiceFabricApplication -ApplicationName "fabric:/AzureFilesVolumePlugin"

# Create a temp hashtable and populate it with the existing values
$parameters = @{ }
$app.ApplicationParameters | ForEach-Object { $parameters.Add($_.Name, $_.Value) }

# Replace the desired parameters
$parameters["test"] = "123test" # Here you would replace with your variable, like $env:username

# Upgrade the application
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/AzureFilesVolumePlugin" -ApplicationParameter $parameters -ApplicationTypeVersion "6.4.617.9590" -UnmonitoredAuto
Keep in mind that the existing VSTS task also performs other operations, like copying the package to SF and registering the application version in the image store; you will need to replicate these. You can copy the full script from the Deploy-FabricApplication.ps1 file in the Service Fabric project and modify it with your changes. The other approach is to get the source for the VSTS task here and add your changes.
If you are planning to use Key Vault, I would recommend the application access the values directly in Key Vault instead of passing them to SF; this way, you can change the values in Key Vault without redeploying the application. In the deployment, you would only pass the Key Vault credentials\configuration.
