Azure Pipeline Unit Tests & Environment Variables

I'm struggling to find another question with an answer for this. I have the following code in a unit test (variable names changed); this information is used in my integration tests:
var configuration = new ConfigurationBuilder()
    .SetBasePath(Environment.CurrentDirectory)
    .AddEnvironmentVariables()
    .AddUserSecrets<MyTestTests>()
    .Build();

var option = new Option();
option.x1 = configuration.GetValue<string>("Option:x1");
option.x2 = configuration.GetValue<string>("Option:x2");
option.x3 = configuration.GetValue<string>("Option:x3");
option.x4 = configuration.GetValue<string>("Option:x4");
return option;
This works fine when my tests run locally. However, when my integration tests run in an Azure Pipeline, the environment variables are not picked up.
I have created them in the format option__x1, where the separator is a double underscore (which .NET's environment-variable configuration provider maps to Option:x1).
If the environment variables are plain (not secret) then it works; however, if they are set as secret then it does not.
Does anyone have any idea?

This behavior is by design, to protect secret variables from being exposed in the task.
This documentation states that secret variables are:
Not decrypted into environment variables. So scripts and programs run by your build steps are not given access by default.
Decrypted for access by your build steps. So you can use them in password arguments and also pass them explicitly into a script or a program from your build step (for example as $(password)).
That is the reason why you could not use the secret variables in your task.
To resolve this issue, you need to explicitly map the secret variables:
variables:
  GLOBAL_MYSECRET: $(mySecret) # secret variables are NOT mapped automatically
  GLOBAL_MY_MAPPED_ENV_VAR: foo # non-secret variables are mapped automatically

steps:
- powershell: |
    Write-Host "The mapped env var works: $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the right way to map a secret to an env variable
You could check this thread and the documentation for some more details.
Hope this helps.

Related

Can I pass a variable from .env file into .gitlab-ci.yml

I'm quite new to CI/CD. Basically, I'm trying to add a job to GitLab CI/CD that will run through the repo looking for secret leaks. It requires an API key to be passed to it. I was able to insert this key directly into .gitlab-ci.yml and it worked as it was supposed to, failing the job and showing that this happened because the key was in that file.
But I would like this API key to be stored in a .env file that won't be pushed to the remote repo, and to pull it from there into the .gitlab-ci.yml file somehow.
Here's mine:
stages:
  - scanning

gitguardian scan:
  variables:
    GITGUARDIAN_API_KEY: ${process.env.GITGUARDIAN_API_KEY}
  image: gitguardian/ggshield:latest
  stage: scanning
  script: ggshield scan ci
The pipeline fails with the message Error: Invalid API key, so I assume that the way I'm passing it into variables is wrong.
CI variables should be available in the gitlab-runner (machine or container) as environment variables. They are either predefined and populated by GitLab, like the list of predefined variables here, or added by you in the settings of the repository or the GitLab group: Settings > CI/CD > Add Variable.
After adding variables you can use the following syntax; you can test whether a variable has the correct value by echoing it.
variables:
  GITGUARDIAN_API_KEY: "$GITGUARDIAN_API_KEY"
script:
  - echo "$GITGUARDIAN_API_KEY"
  - ggshield scan ci

How to use node package dotenv to access local development environment variables in Red Hat OpenShift application?

I'm revisiting a project which hasn't been updated for a while.
In production/online environment, it uses environment variables defined at:
openshift online console > applications > deployments > my node app > environment
In development/offline environment, it uses environment variables defined at:
./src/js/my_modules/local_settings (this file is ignored by .gitignore)
The code looks something like:
// check which environment we are in
if (process.env.MONGODB_USER) {
  var online_status = "online";
} else {
  var online_status = "offline";
}

// if online, use environment variables defined in red hat openshift
if (online_status === 'online') {
  var site_title = process.env.SITE_TITLE;
  var site_description = process.env.SITE_DESCRIPTION;
  // etc
}
// if offline, get settings from a local file
else if (online_status === 'offline') {
  var local_settings = require('./src/js/my_modules/local_settings');
  var site_title = local_settings.SITE_TITLE;
  var site_description = local_settings.SITE_DESCRIPTION;
  // etc
}
I would like to install the dotenv package in my local project repo via:
npm install dotenv
So that I can:
Have my local settings in a .env file in the root of my project (ignored in .gitignore)
Be able to use process.env.SOME_VARIABLE rather than local_settings.SOME_VARIABLE
Get rid of some if/else blocks as both scenarios would point to process.env.SOME_VARIABLE
I'm a bit confused as to how this would affect the online environment.
Seeing as both production/online and development/offline environments would use:
var some_variable = process.env.SOME_VARIABLE_HERE
would the application automatically know to:
Look at the local .env file when in development?
Look at the Red Hat environment variables when in production?
And would adding the required instantiation at the beginning of the server-side file:
require('dotenv').config()
somehow make Red Hat OpenShift freak out (as it seems to already have its own 'things' in place to resolve references to process.env.SOME_VARIABLE_HERE to the relevant values defined in the OpenShift console)?
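For clarity, this is the setup I have in mind (a minimal sketch, relying on dotenv's documented behavior of silently doing nothing when no .env file is present):

// at the very top of the server entry point
require('dotenv').config(); // skips loading if .env is absent;
// existing environment variables (e.g. those set by OpenShift) are not overridden

// both environments then read settings the same way: locally the values
// come from .env, online from the OpenShift-defined environment variables
var site_title = process.env.SITE_TITLE;
var site_description = process.env.SITE_DESCRIPTION;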
Keeping a file per environment (.dev, .staging, .prod) in the source code repository, or manually on the server (if those are in .gitignore), worked for a long time, but nowadays it goes against devops practices.
The clean way is to use environment variables, managed remotely and obtained at the start of your application.
How does it work?
Basically your app doesn't read or need a file (.env, .properties, etc.) with variables anymore. It loads them from a remote HTTP service.
Not intrusive
In this approach, you don't need language-specific variable handling (Node.js in your case). You just need to prepare your app to use environment variables. Your application doesn't care where the variables come from; they just need to be available at the operating-system level.
To achieve that, you just need to download the variables using a simple shell script or a very basic HTTP invocation in your favorite language.
After that, once your app has started, the variables are ready to use at the most basic level:
var site_title = process.env.SITE_TITLE;
This approach is not intrusive because your app doesn't need anything complex like a library or algorithm in some programming language. It just needs the environment variables.
Intrusive
Same as the previous alternative, but instead of reading the variables directly from the environment, you use or create a class/module in your language that offers you the variables you need:
var site_title = VariablesManager.getProperty("SITE_TITLE");
At startup, VariablesManager must have consumed the variables from a remote service (HTTP) and stored them, offering them to whoever needs them through the getProperty method.
This VariablesManager usually also has a hot-reload feature which, at intervals, refreshes the variables from the remote variables manager. With this, if your application is running in production with real users and some variable needs to be updated, you just change it in the variables manager; your app automatically loads the new values, without a restart or touching your app.
This approach is intrusive because you need to load an advanced library in your programming language, or create one.
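To make this concrete, here is a minimal Node.js sketch of such a manager; the endpoint URL and the shape of the JSON response are hypothetical assumptions:

// minimal VariablesManager sketch: fetches a JSON object of variables
// from a remote endpoint and serves them via getProperty()
const https = require('https');

const VariablesManager = {
  vars: {},
  load(url) {
    return new Promise((resolve, reject) => {
      https.get(url, (res) => {
        let body = '';
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => {
          this.vars = JSON.parse(body); // e.g. { "SITE_TITLE": "Acme" }
          resolve(this.vars);
        });
      }).on('error', reject);
    });
  },
  getProperty(name) {
    return this.vars[name];
  }
};

// usage at startup; wrapping load() in setInterval(...) would
// give the hot-reload behavior described above
VariablesManager.load('https://variables.example.com/api/acme-web-staging')
  .then(() => console.log(VariablesManager.getProperty('SITE_TITLE')));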
Devops
Your application just needs a few properties or settings related to consuming the remote variables. For example, the variables of acme-web-staging:
remote_variables_manager = https://variables.com/api
application_id = acme-web-staging
secure_key = *****
You could hide the secure key and parametrize the application_id using environment variables (created in the platform console):
remote_variables_manager = https://variables.com/api
application_id = ${application_id}
secure_key = ${remote_variables_manager_key}
Or, if you want one variables manager per environment:
staging
remote_variables_manager = https://variables-staging.com/api
application_id = acme-web
secure_key = *****
production
remote_variables_manager = https://variables-production.com/api
application_id = acme-web
secure_key = *****
Variables manager
This concept was introduced many years ago. I used it with Java. It consists of a web application with features like:
secure login
create applications
create variables of an application
encrypt sensitive values
publish http endpoints to download or query the variables by application
Here is a list of some ready-to-use alternatives:
Configurator
A Node.js & MySQL solution. I developed it and use it in various projects.
Doppler
zookeeper
http://www.therore.net/java/2015/05/03/distributed-configuration-with-zookeeper-curator-and-spring-cloud-config.html
Spring Cloud
https://www.baeldung.com/spring-cloud-configuration
This is a Java Spring framework feature with which you can create properties files of configuration and set your applications up to read them.
Consul
Consul is a service mesh solution providing a full featured control plane with service discovery, configuration, and segmentation functionality.
doozerd, etcd
In your specific case
Don't use dotenv
Use pure process.env.foo
Deploy a remote variables manager in your OpenShift infrastructure
Create just one variable in your OpenShift web console: APP_ENVIRONMENT
In your code at the start, do something like this:
if (process.env.APP_ENVIRONMENT === "PROD") {
  // get the variables from the remote service using some
  // http client like axios, request, etc., then inject them
  // into your process.env (remoteVariables here stands for
  // the object fetched from that service)
  process.env.site_url = remoteVariables.site_url;
} else {
  // we are in a local developer workspace, so nothing complex
  // is required; the developer injects the variables manually
  // before the startup (npm run start or dev), e.g.:
  // export site_url="acme.com"
}
If you can configure a shell script to run before the start of your OpenShift app, you could load and expose the variables at that stage; the previous snippet would then be unnecessary, because the variables would already be retrievable with process.env directly in your app.

How to find the deployment environment of an azure function app in code

The scenario is:
We have Azure DevOps, and we can run a pipeline into one of x number of named environments.
We make use of Azure App Configuration, with labels for the values for each environment. So each setting might have a different value depending on the label.
It occurs to me that if I match the labels to the names of the environments, then in code, when I get the config values, if I can somehow determine which environment the code has been deployed to, I can just pass that value when getting the app config and I will have the correct config settings for my environment.
var environment = // HERE find my deployed-to environment as in the pipeline (1.)
var credentials = new DefaultAzureCredential();

configurationBuild.AddAzureAppConfiguration(options =>
{
    options.Connect(settings.GetValue<string>("ConnectionStrings:AppConfig"))
        .Select(KeyFilter.Any, LabelFilter.Null)
        .Select(KeyFilter.Any, labelFilter: environment);
});
I was thinking the solution would be to set the environment in azure-pipelines.yaml, where the pipeline somehow knows the choice of environment, and then read it in code back out of an environment variable. But I don't know how to do that, or whether there is a better way. Thanks in advance for any help offered.
You can use the pipeline variables to pass the environment value to your code. The pipeline variables you defined in azure-pipelines.yaml will get injected as environment variables for your platform, which allows you to get their values in your code using Environment.GetEnvironmentVariable().
So you can define a pipeline variable in azure-pipelines.yaml like the example below (i.e. DeployEnv):
parameters:
- name: Environment
  displayName: Deploy to environment
  type: string
  values:
  - none
  - test
  - dev

variables:
  DeployEnv: ${{parameters.Environment}}

trigger: none

pool:
  vmImage: 'windows-latest'
Then you can get the pipeline variable (i.e. DeployEnv) in your code like below:
using System;
var environment = Environment.GetEnvironmentVariable("DeployEnv");
var credentials = new DefaultAzureCredential();
....
Another workaround is to define an environment property in a config file (e.g. web.config) and read that property in your code. In the pipeline, you then need to add tasks that replace the value of the environment property in the config file. See this thread for more information.

Using environment variables in Karate DSL testing

I'd like to incorporate GitLab CI into my Karate testing. I'd like to loop through my tests with different user names and passwords to ensure our API endpoints are responding correctly to different users.
With that in mind, I'd like to be able to store the usernames and passwords as secure environment variables in GitLab (rather than in the karate-config as plain text) and have Karate pull them as needed from either the karate-config or the feature files.
Looking through the docs and StackOverflow questions, I haven't seen an example where it's being done.
Updating with new information
Regarding Peter's comment below, which is what I need, I am trying to set it up as follows:
set client id in karate-config:
var client_id = java.lang.System.getenv('client_id');
in the actual config object:
clientId: client_id
In my feature file I tried to access it:
* def client_id = clientId
It still comes through as null, unfortunately.
You can read Java system properties in Karate using karate.properties,
e.g.,
karate.properties['java.home']
Note that this reads system properties rather than environment variables; for environment variables, use the Java interop shown in the edit below. If this helps you read the values that you are keeping securely on your GitLab, then you can use them in your karate-config for authentication.
But your config and environment variables will become cumbersome if you have too many users.
If you want to run a few features with multiple users, I would suggest you look into this post:
Can we loop feature files and execute using multiple login users in karate
EDIT:
Using Java interop as suggested by Peter:
var systemPath = java.lang.System.getenv('PATH');
To see which variables are actually exposed, try:
var evars = java.lang.System.getenv();
karate.log(evars);
and see the list of all environment variables.
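Putting it together, a minimal karate-config.js sketch for your case might look like this (assuming client_id is exported as an environment variable in the GitLab CI job):

// karate-config.js (sketch)
function fn() {
  var config = {};
  // read the environment variable via Java interop;
  // this returns null if the variable is not set
  config.clientId = java.lang.System.getenv('client_id');
  return config;
}

A feature file should then pick up the value with * def client_id = clientId.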

How can I set environment variables for dredd testing in a dredd.yml file?

I'm trying to run a number of api calls using dredd and api blueprint to test a site. I would like to run the tests on circleCI, as there are Selenium tests running in the same place. Each transaction needs to be accompanied by two tokens, which are set as cookies in the headers. Ideally, these would be set in the dredd.yml file. When running on a local machine, if I replace ACCESS_TOKEN and REFRESH_TOKEN with the actual values, the test runs as expected.
circle.yml:
test:
override:
- dredd
dredd.yml headers
header: ['Cookie: access_token=ACCESS_TOKEN; refresh_token=REFRESH_TOKEN']
Where ACCESS_TOKEN and REFRESH_TOKEN get replaced by the actual values set in CircleCI's environment variables. I have also tried: access_token=$[ACCESS_TOKEN], access_token=$["ACCESS_TOKEN"] and access_token=$ACCESS_TOKEN. None of these are being replaced in the headers for the first API call.
The header looks like: {"Content-Type":"application/json; charset=utf-8","User-Agent":"Dredd/1.4.0 (Darwin 14.5.0; x64)","Cookie":" access_token=$ACCESS_TOKEN; refresh_token=$REFRESH_TOKEN"}
I am new to yaml files, so I'm probably missing something basic, but I did search around for a while. The hooks file is written with node.js, so I don't think the ruby/rails help will be useful here. If I am missing anything in the question don't hesitate to let me know.
YAML is a data representation language, not a template language (or template processor, for that matter). While an individual program might support loading environment variables or additional parameters named in the configuration, the YAML parser (probably, unless it's a custom module) isn't what's injecting them. Skimming the dredd docs, I don't see any references to environment variables or parameters; it may be worth creating an issue on the project and starting a discussion with the developers to see if this is supported.
I can think of a number of ways to solve your specific problem, but they all involve additional tools to render the YAML with your variables injected. Perhaps the easiest solution for your case is to set environment variables in the CircleCI web configuration (NOT the version-controlled circle.yml). Then set up a pre-build step where the YAML configuration is generated. To do this, wrap the YAML in a bash script, with the YAML document contained inside of it as a here-doc.
#!/bin/bash
# ACCESS_TOKEN and REFRESH_TOKEN are injected by CircleCI
cat <<EOF > config.yml
---
header: ['Cookie: access_token=${ACCESS_TOKEN}; refresh_token=${REFRESH_TOKEN}']
EOF
Then run the rest of your job normally, perhaps deleting the configuration file or restoring it from version control before any artifacts are created to avoid the leakage of your credentials.
The better way to work with headers is hook files, setting the headers before each request. As you are using Node.js, try reading Node environment variables in the hooks file (wired up via the hookfiles option in dredd.yml):
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  // ACCESS_TOKEN and REFRESH_TOKEN come from CircleCI's environment
  transaction.request.headers.Cookie =
    'access_token=' + process.env.ACCESS_TOKEN +
    '; refresh_token=' + process.env.REFRESH_TOKEN;
});
