Substitute Service Fabric application parameters during deployment - Azure

I'm setting up my production environment and would like to secure my environment-related variables.
For the moment, every environment has its own application parameters file, which works well, but I don't want every dev on my team knowing the production connection strings and other sensitive data that could appear in there.
So I'm looking for every possibility available.
I've seen that in Azure DevOps, which I'm using at the moment for my CI/CD, there is some support for variable substitution (XML transformation). Is it usable in an SF project?
I've seen in another project something similar through Octopus.
Are there any other tools that would help me manage my variables by environment safely (and easily)?
Can I do that with my KeyVault eventually?
Any recommendations?
Thanks
EDIT: an example of how I'd like to manage those values; this is a screenshot from Octopus:
So something similar to this, something that separates the values per environment and injects them, is what I'm looking for.

You can apply an XML transformation to the ApplicationParameters file to update the values in there before you deploy it.
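For instance, a sketch of what the ApplicationParameters file could look like with a replaceable token; the #{...}# token style assumes a token-replacement step in the pipeline, and all names here are illustrative:
<?xml version="1.0" encoding="utf-8"?>
<Application Name="fabric:/MyApp" xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <Parameters>
    <!-- The pipeline replaces the token with the environment's secret value -->
    <Parameter Name="ConnectionString" Value="#{ProdConnectionString}#" />
  </Parameters>
</Application>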
The other option is to use PowerShell to update the application and pass the parameters as arguments to the script.
The Start-ServiceFabricApplicationUpgrade command accepts a hashtable of parameters; technically, the built-in task in VSTS/DevOps transforms the application parameters into a hashtable. The script would be something like this:
#Get the existing parameters
$app = Get-ServiceFabricApplication -ApplicationName "fabric:/AzureFilesVolumePlugin"
#Create a temp hashtable and populate it with the existing values
$parameters = @{ }
$app.ApplicationParameters | ForEach-Object { $parameters.Add($_.Name, $_.Value) }
#Replace the desired parameters
$parameters["test"] = "123test" #Here you would replace it with your variable, like $env:username
#Upgrade the application
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/AzureFilesVolumePlugin" -ApplicationParameter $parameters -ApplicationTypeVersion "6.4.617.9590" -UnmonitoredAuto
Keep in mind that the existing VSTS task also performs other operations, like copying the package to SF and registering the application version in the image store; you will need to replicate those. You can copy the full script from the Deploy-FabricApplication.ps1 file in the Service Fabric project and adapt it with your changes. The other approach is to get the source for the VSTS task here and add your changes.
If you are planning to use Key Vault, I would recommend that the application access the values directly from Key Vault instead of passing them to SF; this way, you can change the values in Key Vault without redeploying the application. In the deployment, you would only pass the Key Vault credentials/configuration.
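As a rough sketch of that approach, assuming the Az.KeyVault PowerShell module and a vault and secret name that are purely illustrative, reading a value at runtime looks like this:
#Read the connection string directly from Key Vault at runtime (Az.KeyVault module)
#Vault name and secret name below are hypothetical examples
$conn = Get-AzKeyVaultSecret -VaultName "MyVault" -Name "ProdConnectionString" -AsPlainText
#Use $conn to configure the service; rotating the secret in Key Vault needs no redeploy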

Related

Unable to utilise variable passed from previous step in an Azure release pipeline

I have an Azure release pipeline that I am using for a Windows Virtual Desktop deployment.
I have created a simple PowerShell task to obtain some information about the available session hosts and capture it in a list, which I have then set as an output variable using
$sessions = "vm_name1 vm_name2 vm_name3"
write-host "##vso[task.setvariable variable=sessions;isOutput=true;]$sessions"
I can then retrieve this fine in later tasks by using
write-host $(taskreference.sessions)
result: vm_name1 vm_name2 vm_name3
However, I need to be able to parse this variable $(sessions) to obtain the individual host names to be used in later steps of my release pipeline.
i.e. $vms = $(sessions).Split(" ")
$vms[0]
$vms[1]
etc etc
I have tried various methods to access and expand this variable but am ultimately struggling to get anything working, as it always returns null. When using an Azure CLI task, I am able to successfully access the variable using the exact same code. I suspect this has something to do with how the tasks are expanded at compile time versus run time.
Is there any way that I could parse the variable properly to obtain the individual host names?
The approach you described would work fine if the variable were passed in as a string parameter.
Otherwise, this might work:
$vms = "$(sessions)".Split(" ")
$vms[0]
$vms[1]
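The quotes matter because $(sessions) is a macro that is substituted before the script runs; wrapping it in quotes turns the substituted value into a string literal the script can split. As an alternative sketch, output variables are also exposed to later tasks as environment variables named after the reference (here that would be TASKREFERENCE_SESSIONS; verify the exact name in your pipeline):
#Alternative: read the output variable through its environment-variable form
$vms = $env:TASKREFERENCE_SESSIONS.Split(" ")
Write-Host $vms[0]
Write-Host $vms[1]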

Use variable in Terraform remote backend

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For the Terraform remote backend, would there be a way to use a variable to specify the organization / workspace name instead of the hardcoded values there?
The Terraform documentation didn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable then you can use partial configuration for the static parts (eg the type of backend such as S3) and then provide config at run time interactively, via environment variables or via command line flags.
I personally wrap Terraform actions in a small shell script that runs terraform init with command line flags, using an appropriate S3 bucket (e.g. a different one for each project and AWS account) and making sure the state file location matches the path of the directory I am working in.
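A minimal sketch of such a wrapper, assuming the code declares an empty partial backend (backend "s3" {} with no arguments) and using bucket/key names that are purely illustrative:
#!/usr/bin/env bash
# Wrapper that supplies the backend settings at init time.
# Assumes main.tf declares a partial backend:  backend "s3" {}
set -euo pipefail

PROJECT="my-app"          # illustrative
ENVIRONMENT="${1:-prod}"  # pick the environment on the command line

terraform init \
  -backend-config="bucket=${PROJECT}-${ENVIRONMENT}-tfstate" \
  -backend-config="key=${PWD##*/}/terraform.tfstate" \
  -backend-config="region=us-east-1"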
I had the same problems and was very disappointed with the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt because it closes the gap between Terraform and the lack of using variables at some points, e.g. for the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
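For illustration, a minimal terragrunt.hcl along the lines of that guide might look like this (the bucket name and region are hypothetical):
# terragrunt.hcl - sketch of a DRY remote backend configuration
remote_state {
  backend = "s3"
  config = {
    bucket = "my-company-tfstate"  # hypothetical
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}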

Access secret environment properties in IBM cloud deploy - NodeJS

I'm having some problems accessing the secret environment properties I've set in my build stage. In the build environment properties I have two secure fields called "w_username" and "w_password"; however, I cannot access these properties inside of my NodeJS runtime. I've tried process.env['w_username'], but it seems like it can't find it. How is it possible to access them?
Using NodeJS 6.x, npm 6.x with SDK for NodeJS on IBM cloud.
You can directly access the build environment properties in the next stage in the toolchain with their names, like w_username and w_password (see the sketch after the list below).
You can examine the environment properties for a pipeline job by running the env command in the job's script.
You can also define your own environment properties. For example, you might define an API_KEY property that passes an API key that is used by all scripts in the pipeline to access IBM Cloud resources.
You can add the following types of properties:
Text: A property key with a single-line value.
Text Area: A property key with a multi-line value.
Secure: A property key with a single-line value that is secured with AES-128 encryption. The value is displayed as asterisks.
Properties: A file in the project's repository. This file can contain multiple properties. Each property must be on its own line. To separate key-value pairs, use the equals sign (=). Enclose all string values in quotation marks. For example, MY_STRING="SOME STRING VALUE".
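As a rough sketch, assuming the app is pushed to Cloud Foundry from a deploy job in the same toolchain (the app name is illustrative; the property names mirror the question), the job's script can forward the properties to the Node.js runtime:
#!/bin/bash
# Deploy-job script sketch: the pipeline injects w_username/w_password into
# this job's environment, not into the Node.js runtime, so forward them.
cf push "my-node-app" --no-start
cf set-env "my-node-app" w_username "${w_username}"
cf set-env "my-node-app" w_password "${w_password}"
cf start "my-node-app"
# The app can then read process.env.w_username / process.env.w_password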
For more information, refer here
Hope this helps

Introduce new data source in Terraform

I am new to Terraform and have been trying to understand its constructs. Let's say I have a service which exposes REST APIs and I want to call those REST APIs as part of my Terraform script; what are the steps I need to take?
My understanding is that I need to write a custom provider, but I am unable to connect the dots on how to add a new data source type for the new provider.
Also, assuming that we do have the required provider, what's the protocol that would be used for communicating with my service? Is it HTTP/S?
One more point to note is that my service is currently used for configuring storage in the backend.
Recent versions of Terraform (> 0.9, I believe) support external data sources. You don't have to create a custom provider. You can call any arbitrary shell or Python script that returns values you can use as data.
data "external" "example" {
program = ["python", "${path.module}/example-data-source.py"]
query = {
# arbitrary map from strings to strings, passed
# to the external program as the data query.
id = "abc123"
}
}
In your case you could use a simple curl in a bash script to call your endpoint and return the data to Terraform as a map of strings.
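For illustration, a minimal sketch of what example-data-source.py could look like; the external data source protocol passes the query map as JSON on stdin and expects a JSON object of string values on stdout (the actual REST call is left as a placeholder):
#!/usr/bin/env python
# example-data-source.py - sketch of the external data source protocol
import json
import sys

# Terraform passes the query map as a JSON object on stdin
query = json.load(sys.stdin)

# Placeholder: call your REST API here using query["id"]
# (e.g. with urllib or requests) and build the result from the response.
result = {"id": query["id"], "status": "ok"}

# All values in the output object must be strings
print(json.dumps(result))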
Do note the warnings at the top of that page.
This is considerably more difficult than it appears; it is impossible to debug the interaction between what Terraform is sending to my script and what the script is expecting. It just fails to parse the arguments and refuses to provide me any feedback as to what is getting into the program.
