Access secret environment properties in IBM Cloud deploy - Node.js

I'm having a problem accessing the secret environment properties I've set in my build stage. In the build environment properties I have two secret fields called "w_username" and "w_password"; however, I cannot access these properties inside my Node.js runtime. I've tried process.env['w_username'], but it seems it can't find it. How is it possible to access them?
Using Node.js 6.x and npm 6.x with the SDK for Node.js on IBM Cloud.

You can access the build environment properties directly in the next stage of the toolchain under their names, in this case w_username and w_password.
You can examine the environment properties for a pipeline job by running the env command in the job's script.
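For instance, a job script along these lines (property names taken from the question) will confirm the properties are present:
#!/bin/bash
# Inside a pipeline job script, stage properties arrive as ordinary env vars.
# Careful: secure values are masked in the UI but print in clear text in the job log.
env | grep '^w_'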
You can also define your own environment properties. For example, you might define an API_KEY property that passes an API key that is used to access IBM Cloud resources by all scripts in the pipeline.
You can add the following types of properties:
Text: A property key with a single-line value.
Text Area: A property key with a multi-line value.
Secure: A property key with a single-line value that is secured with AES-128 encryption. The value is displayed as asterisks.
Properties: A file in the project's repository. This file can contain multiple properties. Each property must be on its own line. To separate key-value pairs, use the equals sign (=). Enclose all string values in quotation marks. For example, MY_STRING="SOME STRING VALUE".
For more information, refer to the IBM Cloud Continuous Delivery documentation.
Hope this helps
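Note that those properties are visible to pipeline jobs, not automatically to the deployed app's runtime. If the goal is to read them from the running app (as in the question), one sketch of a bridging approach - assuming a Cloud Foundry deploy job, where $CF_APP is provided by the pipeline - is to forward them explicitly:
#!/bin/bash
# Deploy job script (sketch): copy pipeline properties into the app's environment.
cf push "$CF_APP" --no-start
cf set-env "$CF_APP" w_username "$w_username"
cf set-env "$CF_APP" w_password "$w_password"
cf start "$CF_APP"
# The Node.js app can then read process.env.w_username / process.env.w_password.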

Related

terraform interpolation with variables returning error [duplicate]

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For the Terraform remote backend, is there a way to use variables to specify the organization / workspace name instead of the hardcoded values there?
The Terraform documentation didn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable, you can use partial configuration for the static parts (e.g. the type of backend, such as S3) and then provide the rest of the configuration at run time: interactively, via environment variables, or via command-line flags.
I personally wrap Terraform actions in a small shell script that runs terraform init with command-line flags pointing at the appropriate S3 bucket (e.g. a different one for each project and AWS account) and makes sure the state file location matches the path of the directory I am working in.
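A stripped-down sketch of such a wrapper (the bucket naming scheme, region, and the AWS_ACCOUNT_ID variable are placeholders; the Terraform config itself keeps an empty backend "s3" {} block for partial configuration):
#!/bin/bash
# init.sh - inject the backend settings at init time instead of hardcoding them
set -euo pipefail
PROJECT="$(basename "$PWD")"
terraform init \
  -backend-config="bucket=tfstate-${AWS_ACCOUNT_ID}" \
  -backend-config="key=${PROJECT}/terraform.tfstate" \
  -backend-config="region=eu-west-1"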
I had the same problems and was very disappointed by the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt because it closes the gap where Terraform lacks variable support in some places, e.g. the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
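For a flavour of what that looks like, here is a minimal terragrunt.hcl in the spirit of that guide (bucket and region are placeholders):
# terragrunt.hcl - Terragrunt generates the backend config Terraform itself cannot interpolate
remote_state {
  backend = "s3"
  config = {
    bucket = "my-company-terraform-state"
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "eu-west-1"
  }
}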

Node-Red mongodb3 connect DB using URL from environment variable

I'm running Node-RED embedded in an Express application, and using 'dotenv' to load environment variables.
For storage I'm using MongoDB with 'node-red-contrib-mongodb3'.
Everything works as expected, but I have several environments and a different MongoDB for each of them.
I want to connect to MongoDB from configuration (a .env file or environment file): something like global.get('env').MONGODB_DEV_URL or msg.MONGODB_URL in the MongoDB config node's URL input box.
I tried looking for an option in the 'mongodb3' documentation and on Google, still no luck. Any help or direction will be appreciated.
From the Node-RED docs:
Any node property can be set with an environment variable by setting its value to a string of the form ${ENV_VAR}. When the runtime loads the flows, it will substitute the value of that environment variable before passing it to the node.
This only works if it replaces the entire property - it cannot be used to substitute just part of the value. For example, it is not possible to use CLIENT-${HOST}.
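So in this case you could set the config node's URL field to ${MONGODB_URL} and make sure dotenv has populated the environment before the runtime loads the flows. A minimal sketch for the embedded setup (the variable name MONGODB_URL and the paths here are assumptions):
// app.js - load .env before Node-RED initializes, so ${MONGODB_URL} can be substituted
require('dotenv').config(); // must run before RED.init()/RED.start()

const express = require('express');
const http = require('http');
const RED = require('node-red');

const app = express();
const server = http.createServer(app);

RED.init(server, { httpAdminRoot: '/red', httpNodeRoot: '/api', userDir: './.node-red' });
app.use('/red', RED.httpAdmin);
app.use('/api', RED.httpNode);

server.listen(3000, () => RED.start());
// In the mongodb3 config node, set the URL field to exactly: ${MONGODB_URL}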

Substitute Service Fabric application parameters during deployment

I'm setting up my production environment and would like to secure my environment-related variables.
For the moment, every environment has its own application parameters file, which works well, but I don't want every dev on my team knowing the production connection strings and other sensitive values that could appear in there.
So I'm looking for every possibility available.
I've seen that in Azure DevOps, which I'm currently using for my CI/CD, some variable substitution is possible (XML transformation). Is it usable in a Service Fabric project?
I've seen in another project something similar through Octopus.
Are there any other tools that would help me manage my variables by environment safely (and easily)?
Can I do that with my KeyVault eventually?
Any recommendations?
Thanks
EDIT: As an example of how I'd like to manage those values, the original post included a screenshot from Octopus, which separates variable values per environment and injects them at deployment. Something similar is what I'm looking for.
You can apply an XML transformation to the ApplicationParameters file to update the values in there before you deploy it.
The other option is to use PowerShell to upgrade the application, passing the parameters as arguments to the script.
The Start-ServiceFabricApplicationUpgrade command accepts a hashtable of parameters; technically, the built-in task in VSTS\DevOps transforms the application parameters into a hashtable. The script would be something like this:
# Get the existing parameters
$app = Get-ServiceFabricApplication -ApplicationName "fabric:/AzureFilesVolumePlugin"
# Create a temp hashtable and populate it with the existing values
$parameters = @{ }
$app.ApplicationParameters | ForEach-Object { $parameters.Add($_.Name, $_.Value) }
# Replace the desired parameters
$parameters["test"] = "123test" # Here you would replace with your variable, like $env:username
# Upgrade the application
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/AzureFilesVolumePlugin" -ApplicationParameter $parameters -ApplicationTypeVersion "6.4.617.9590" -UnmonitoredAuto
Keep in mind that the existing VSTS task also performs other operations, like copying the package to SF and registering the application version in the image store; you will need to replicate those. You can copy the full script from the Deploy-FabricApplication.ps1 file in the Service Fabric project and apply your changes to it. The other approach is to get the source for the VSTS task here and add your changes.
If you are planning to use Key Vault, I would recommend that the application access the values directly from Key Vault instead of having them passed in via SF; this way, you can change the values in Key Vault without redeploying the application. At deployment, you would only pass the Key Vault credentials/configuration.
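As a rough sketch of that last approach (cmdlets from the Az.KeyVault module; vault and secret names are placeholders, and -AsPlainText needs a recent module version):
# Runs inside the service or its startup script at run time, not at deployment.
Connect-AzAccount -Identity   # e.g. a managed identity on the cluster VMs
$connectionString = Get-AzKeyVaultSecret -VaultName "my-prod-vault" -Name "DbConnectionString" -AsPlainText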

How can I set environment variables for dredd testing in a dredd.yml file?

I'm trying to run a number of API calls using Dredd and API Blueprint to test a site. I would like to run the tests on CircleCI, since there are Selenium tests running in the same place. Each transaction needs to be accompanied by two tokens, which are set as cookies in the headers. Ideally, these would be set in the dredd.yml file. When running on a local machine, if I replace ACCESS_TOKEN and REFRESH_TOKEN with the actual values, the test runs as expected.
circle.yml:
test:
  override:
    - dredd
dredd.yml headers:
header: ['Cookie: access_token=ACCESS_TOKEN; refresh_token=REFRESH_TOKEN']
Where ACCESS_TOKEN and REFRESH_TOKEN get replaced by the actual values set in CircleCI's environment variables. I have also tried access_token=$[ACCESS_TOKEN], access_token=$["ACCESS_TOKEN"], and access_token=$ACCESS_TOKEN. None of these are being replaced in the headers for the first API call.
The header looks like: {"Content-Type":"application/json; charset=utf-8","User-Agent":"Dredd/1.4.0 (Darwin 14.5.0; x64)","Cookie":" access_token=$ACCESS_TOKEN; refresh_token=$REFRESH_TOKEN"}
I am new to YAML files, so I'm probably missing something basic, but I did search around for a while. The hooks file is written in Node.js, so I don't think the Ruby/Rails help will be useful here. If I am missing anything in the question, don't hesitate to let me know.
YAML is a data representation language, not a template language (or template processor, for that matter). While an individual program might support loading environment variables or additional parameters named in the configuration, the YAML parser (probably, unless it's a custom module) isn't what's injecting them. Skimming the Dredd docs, I don't see any references to environment variables or parameters; it may be worth creating an issue on the project and starting a discussion with the developers to see if this is supported.
I can think of a number of ways to solve your specific problem, but they all involve additional tools to render the YAML with your variables injected. Perhaps the easiest solution for your case is to set environment variables in the CircleCI web configuration (NOT the version-controlled circle.yml). Then, set up a pre-build step where the YAML configuration is generated. To do this, wrap the YAML in a Bash script, with the YAML document contained inside it as a here-doc.
#!/bin/bash
# ACCESS_TOKEN and REFRESH_TOKEN are injected by CircleCI
cat <<EOF > config.yml
---
header: ['Cookie: access_token=${ACCESS_TOKEN}; refresh_token=${REFRESH_TOKEN}']
EOF
Then run the rest of your job normally, perhaps deleting the configuration file or restoring it from version control before any artifacts are created to avoid the leakage of your credentials.
A better way to work with headers is to use hook files that set the headers before each request. Since you are using Node.js, you can read the values from Node's environment variables:
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  transaction.request.headers.Cookie =
    'access_token=' + process.env.ACCESS_TOKEN +
    '; refresh_token=' + process.env.REFRESH_TOKEN;
});
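Then point Dredd at the hook file from dredd.yml so it is loaded on every run:
hookfiles: ./hooks.js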

What is the laravel way of storing API keys?

Is there a specific file or directory that is recommended for storing API keys? I'd like to take my keys out of my codebase but I'm not sure where to put them.
This is an updated answer for newer versions of Laravel.
First, set the credentials in your .env file. Generally you'll want to prefix it with the name of the service, so in this example I'll use Google Maps.
GOOGLE_KEY=secret_api_key
Then, take a look in config/services.php - it's where we can map environment variables into the app configuration. You'll see some existing examples out of the box. You can add additional configuration under the service name and point it to the environment variable.
'google' => [
    'key' => env('GOOGLE_KEY'),
],
Then when you need to access this key within your app you can get it through the app configuration instead.
// Through a facade
Config::get('services.google.key');
// Through a helper
config('services.google.key');
Be sure not to just use env('GOOGLE_KEY') throughout your app - it's more performant to go through the app configuration since it's cached, especially if you call php artisan config:cache as part of your deployment process (once the config is cached, env() calls outside the config files return null).
You can make your API keys environment variables and then access them that way. Read more about protecting sensitive configuration from the docs.
You simply create a .env.php file in the root of your project that returns an array of environment variables.
<?php

return array(
    'SECRET_API_KEY' => 'PUT YOUR API KEY HERE',
);
Then you can access it in your app like so.
getenv('SECRET_API_KEY');
