How can I set environment variables for dredd testing in a dredd.yml file? - node.js

I'm trying to run a number of API calls using Dredd and API Blueprint to test a site. I would like to run the tests on CircleCI, since there are Selenium tests running in the same place. Each transaction needs to be accompanied by two tokens, which are set as cookies in the headers. Ideally, these would be set in the dredd.yml file. When running on a local machine, if I replace ACCESS_TOKEN and REFRESH_TOKEN with the actual values, the test runs as expected.
circle.yml:
test:
  override:
    - dredd
dredd.yml headers:
header: ['Cookie: access_token=ACCESS_TOKEN; refresh_token=REFRESH_TOKEN']
Here ACCESS_TOKEN and REFRESH_TOKEN should get replaced by the actual values set in CircleCI's environment variables. I have also tried: access_token=$[ACCESS_TOKEN], access_token=$["ACCESS_TOKEN"], and access_token=$ACCESS_TOKEN. None of these are replaced in the headers for the first API call.
The header looks like: {"Content-Type":"application/json; charset=utf-8","User-Agent":"Dredd/1.4.0 (Darwin 14.5.0; x64)","Cookie":" access_token=$ACCESS_TOKEN; refresh_token=$REFRESH_TOKEN"}
I am new to YAML files, so I'm probably missing something basic, but I did search around for a while. The hooks file is written in Node.js, so I don't think the Ruby/Rails answers will be useful here. If I am missing anything in the question, don't hesitate to let me know.

YAML is a data representation language, not a template language (or template processor, for that matter). While an individual program might support loading environment variables or additional parameters named in the configuration, the YAML parser (probably, unless it's a custom module) isn't what's injecting them. Skimming the Dredd docs, I don't see any references to environment variables or parameters; it may be worth creating an issue on the project and starting a discussion with the developers to see whether this is supported.
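For example, loading a header line with a YAML library such as js-yaml (used here purely for illustration; it is not necessarily what Dredd uses internally) hands the placeholder back as a literal string:
// A YAML parser returns placeholders verbatim; nothing at the parsing
// stage substitutes environment variables for you.
var yaml = require('js-yaml');

var doc = yaml.load("header: ['Cookie: access_token=$ACCESS_TOKEN']");
console.log(doc.header[0]); // => Cookie: access_token=$ACCESS_TOKEN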
I can think of a number of ways to solve your specific problem, but they all involve additional tools to render the YAML with your variables injected. Perhaps the easiest solution for your case is to set the environment variables in the CircleCI web configuration (NOT the version-controlled circle.yml). Then set up a pre-build step in which the YAML configuration is generated. To do this, wrap the YAML in a Bash script, with the YAML document contained inside it as a here-doc:
#!/bin/bash
# ACCESS_TOKEN and REFRESH_TOKEN are injected by CircleCI
cat <<EOF > config.yml
---
header: ['Cookie: access_token=${ACCESS_TOKEN}; refresh_token=${REFRESH_TOKEN}']
EOF
Then run the rest of your job normally, perhaps deleting the configuration file or restoring it from version control before any artifacts are created, to avoid leaking your credentials.
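If you would rather stay in Node.js than shell, the same idea can be expressed as a small rendering script run as a pre-build step. This is only a sketch: it assumes a hypothetical dredd.yml.template checked into the repo containing ${VAR}-style placeholders.
// render-config.js -- hypothetical pre-build step: substitute ${VAR}
// placeholders in dredd.yml.template with values from the environment
// (set in CircleCI's web configuration) and write out dredd.yml.
var fs = require('fs');

var template = fs.readFileSync('dredd.yml.template', 'utf8');

var rendered = template.replace(/\$\{(\w+)\}/g, function (match, name) {
  if (process.env[name] === undefined) {
    throw new Error('Missing environment variable: ' + name);
  }
  return process.env[name];
});

fs.writeFileSync('dredd.yml', rendered);
In circle.yml you would then run node render-config.js before dredd.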

A better way to work with headers is to set them in a hooks file before each request. Since you are using Node.js, read the tokens from environment variables there:
var hooks = require('hooks');

// Set the cookie header before every transaction, reading the token
// values from the environment (e.g. CircleCI project settings).
hooks.beforeEach(function (transaction) {
  transaction.request.headers.Cookie =
    'access_token=' + process.env.ACCESS_TOKEN +
    '; refresh_token=' + process.env.REFRESH_TOKEN;
});
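Save this as, for example, hooks.js and point Dredd at it with the hookfiles option (dredd --hookfiles=./hooks.js on the command line, or a hookfiles entry in dredd.yml). Environment variables defined in CircleCI's project settings are exported into the build, so they are available to the hook through process.env without ever appearing in a versioned file.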

Related

Rails 6+: order in which Rails reads SECRET_KEY_BASE (env var versus credentials.yml.enc)

For context, I'm in the process of updating a Rails app to 5.2 and then to 6.0.
I'm updating my credentials to use the config/credentials.yml.enc and config/master.key defaults with Rails 5.2+ apps.
The Rails docs state:
In test and development applications get a secret_key_base derived from the app name. Other environments must use a random key present in config/credentials.yml.enc
(emphasis added)
This leads me to think that in production the SECRET_KEY_BASE value is required to be read from Rails.application.credentials.secret_key_base via config/credentials.yml.enc. In test and development environments, the secret_key_base is essentially "irrelevant", since it's derived from the app name.
However, when I was looking at the Rails source code, it reads:
def key
  read_env_key || read_key_file || handle_missing_key
end
That seems to say the order of reading values is:
ENV["SECRET_KEY_BASE"]
Rails.application.credentials.secret_key_base
Raise an error
I use Heroku for my hosting, and have an ENV["SECRET_KEY_BASE"] environment variable that stores this secret value.
Questions
If I have both ENV["SECRET_KEY_BASE"] and Rails.application.credentials.secret_key_base set, which one takes priority?
Is using the ENV var going to be deprecated at some point?
I have lots of environment-specific ENV variables because I don't want to use my production accounts in development for AWS S3 buckets, stripe accounts, etc. The flat-file format of credentials.yml.enc seems to assume developers only need to access these 3rd-party APIs in production. Is there an accepted format to handle environment-specific credentials yet in Rails?
I read through the comment threads on DHH's original PR as well as a linked PR that says it implements environment-specific credentials, but the docs don't mention this implementation so I'm not certain if it's the standard or if it's going to go away sometime soon.

How to add token as env vars for tavern api testing

I am new to Tavern API testing and I am trying to pass a token as an environment variable (my API is written in Node.js). Here is my code:
test_name: POST /logs
marks:
  - post_logs
stages:
  - name: post a log entry
    request:
      url: "{host:s}:{port:d}{base_path:s}/investigate/api/{version:s}/logs"
      method: POST
      headers:
        Authorization: "Basic {tavern.env_vars.TOKEN}"
        content-type: application/json
      params:
      body:
        log: blahblahblah
    response:
      status_code: 204
My problem is that I do not know where to add my token in env_vars. Is it a special Tavern .env file I need to add?
You need to define your token as an environment variable in the shell from which you run Tavern tests. There are many ways to define an environment variable. My examples use Bash syntax; you may need to look up the right syntax if you are using a different shell. For testing with a short-lived token, you can define an environment variable right on the same command line that runs the tests:
TOKEN="some_token_value" py.test
The problem with that approach is that the token value gets saved in your shell command history, which is not a good security practice. A better approach is to create a file to store confidential data like a long-lived authentication token. The file name does not matter, but a common choice is .env. The contents of the file should be:
export TOKEN="some_token_value"
If using Git, add .env to your .gitignore file so that credentials are never added to your repo. Source the .env file to set the environment variables prior to running tests:
source .env
py.test
Environment variables only last as long as the shell session, so you need to source the file every time you open a new shell (terminal window or SSH session).
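The same pattern carries over to CI: instead of committing a .env file, define TOKEN as a secret variable in your CI system's project settings, and the runner will export it into the job's environment before the test step runs, so {tavern.env_vars.TOKEN} resolves without any file on disk.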

Using environment variables in Karate DSL testing

I'd like to incorporate GitLab CI into my Karate testing. I'd like to loop through my tests with different user names and passwords to ensure our API endpoints are responding correctly to different users.
With that in mind, I'd like to be able to store the usernames and passwords as secure environment variables in GitLab (rather than in the karate-config as plain text) and have Karate pull them as needed from either the karate-config or the feature files.
Looking through the docs and StackOverflow questions, I haven't seen an example where it's being done.
Updating with new information
In regard to Peter's comment below, which is what I need, I am trying to set it up as follows.
Set the client id in karate-config:
var client_id = java.lang.System.getenv('client_id');
In the actual config object:
clientId: client_id
In my feature file, I tried to access it:
* def client_id = clientId
It still comes through as null, unfortunately.
You can read Java system properties in Karate using karate.properties, e.g.:
karate.properties['java.home']
If passing the values you keep securely in GitLab as system properties works for you (for true environment variables, see the edit below), you can use them in your karate-config for authentication.
But your config and environment variables will get cumbersome if you have too many users.
If you want to run a few features with multiple users, I would suggest you look into this post,
Can we loop feature files and execute using multiple login users in karate
EDIT:
Using Java interop as suggested by Peter:
var systemPath = java.lang.System.getenv('PATH');
To see which variables are actually exposed, try:
var evars = java.lang.System.getenv();
karate.log(evars);
and see the list of all environment variables.
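Putting it together, a karate-config.js along the lines of the question's update might look like the sketch below; client_id is just the variable name the question uses, so match it to whatever your GitLab CI variable is actually called:
// karate-config.js -- sketch: read a CI secret via Java interop and
// expose it to feature files as `clientId`.
function fn() {
  var clientId = java.lang.System.getenv('client_id');
  if (!clientId) {
    karate.log('WARNING: client_id is not set in the environment');
  }
  return {
    clientId: clientId
  };
}
A feature file can then refer to clientId directly (or via * def client_id = clientId). If it still comes through as null, the variable is most likely not exported in the environment of the process actually running the tests, which the getenv() listing above will confirm.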

Is it possible to set access_token in netlify.toml file?

The netlify command line lets you specify the access_token either through ~/.config/.netlify or the -A switch.
However, I was wondering whether it would also be accepted through the ./netlify.toml config file.
In the docs there seem to be fields that suggest it might:
[context.production]
environment = { ACCESS_TOKEN = "super secret", NODE_ENV = "8.0.1" }
[context.deploy-preview.environment]
ACCESS_TOKEN = "not so secret"
But when I try it, I get the error "No access token found. Please login." (from the debug logs).
So, is it possible to set the access_token through the ./netlify.toml file, and if so, what am I doing wrong?
If not, what does the ACCESS_TOKEN mentioned in the docs actually do, and how is it different from the access_token found in the ~/.config/.netlify file?
So, is it possible to set the access_token through the netlify.toml file, and if so, what am I doing wrong?
The netlifyctl command line sets up the access_token in a config file, not an environment variable, so the ACCESS_TOKEN environment variable would not be used by the netlifyctl command at the time of this answer.
If not, what does the ACCESS_TOKEN mentioned in the docs actually do, and how is it different from the access_token found in the ~/.config/.netlify file?
The ACCESS_TOKEN mentioned in the docs is just an example of how to set an environment variable for use in a script or build process at deploy time on Netlify. The two are not one and the same and have nothing to do with each other in this case. In theory, you could write a script that runs netlifyctl -A, using the environment variable to pass the access token to the command.
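For instance, a Node script run by your Netlify build command could read the per-context value at build time; ACCESS_TOKEN here is just the name from the docs example above:
// build.js -- sketch of a build-time script on Netlify. Variables declared
// under [context.*.environment] in netlify.toml are exported into the
// build's environment; they are not read by the CLI as login credentials.
var token = process.env.ACCESS_TOKEN;

if (!token) {
  throw new Error('ACCESS_TOKEN is not set for this deploy context');
}

// ...use token during the build, e.g. to fetch content from an API...
console.log('ACCESS_TOKEN is available to the build');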
NOTE: Do not put secret tokens into the netlify.toml file or .env files of a public repository for Netlify. In fact, be careful when using secret keys in public repositories on Netlify at all: these keys could be exposed by a commit or pull request, by someone else or by accident. There is an explanation here of how to build a .env file from the "Build Environment Variables" section, to create environment variables for use in build scripts in a private repository.

Is there any better way to pass sensitive data to the program, alternative to env variables?

I want to use the npm twitter package, and it recommends using environment variables, but setting them up on Windows machines is a horror, so I want to avoid env variables. The next try is keeping the variables in an external JSON file (like here in my repo), which is never committed, but that doesn't play well with CI, because if it's not in the repo, how can I use it and test, right?
Let me show.
Env variables (Windows users' nightmare):
var Twitter = require('twitter');
var client = new Twitter({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
  access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
});
untestable crap
var Twitter = require('twitter');
var keys = require('./keys.json');
var client = new Twitter(keys);
with this line in .gitignore:
keys.json
A no-winners situation; is there any better way?
There are no winners in this situation and it makes me sad. I want to achieve two simple goals: easy consumption and testability. Can you help me? How do you deal with this?
Update: I'm talking in terms of developing an open-source lib based on the Twitter API, not an end-user product; that's why I feel insecure about keeping tokens in the repo.
Update 2: Windows users have the set and setx commands. Hurray! Thanks to Martin Konecny for noting this.
Solution: while setting up env variables on Windows is not actually painful, it's better to let the code consumer choose how to pass data to their end product (which uses my lib). That way we end up with a situation that has no "data-passing" problem, and because of that it's testable: I can use env variables in my own tests to test it on Travis CI.
Just let your user choose what's best for them. Implement (or use a library, as there are such libraries for most languages) something that lets you pass and shadow properties from many different sources: API, file, env variables, command line (see the sketch after the list below).
Then:
In your local tests you can simply use the API as part of the test configuration.
In your integration tests you can use a JSON file (ignored by git).
On Travis you can use command-line parameters or environment properties.
In production you will use environment properties or a remote server with configuration.
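A minimal sketch of that layering in Node.js (file and property names here are placeholders): explicit options win over a git-ignored keys.json, which wins over environment variables.
// loadKeys.js -- hypothetical layered configuration: explicit options
// override a local, git-ignored keys.json, which overrides env variables.
var fs = require('fs');

function loadKeys(explicit) {
  var fromEnv = {
    consumer_key: process.env.TWITTER_CONSUMER_KEY,
    consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
    access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
    access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
  };

  var fromFile = {};
  if (fs.existsSync('./keys.json')) {
    fromFile = JSON.parse(fs.readFileSync('./keys.json', 'utf8'));
  }

  return Object.assign({}, fromEnv, fromFile, explicit || {});
}

module.exports = loadKeys;
Then new Twitter(loadKeys()) works locally with keys.json and on Travis with environment variables, and a unit test can call loadKeys({ consumer_key: 'fake' }) directly.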
Seeing as this is an open-source project, you will most likely not be able to include your Twitter API keys inside the project itself. I see two potential solutions:
1. Require the users to register for their own Twitter API credentials, and add these to your project's config file before running the project and its tests.
2. If you are trying to use something like Travis CI to auto-test any new commits, you may need to mock your requests instead.
Option #2 may not be ideal since it doesn't take into account any future API changes from Twitter; however, it lets you catch breakage introduced by commits, assuming the API does remain stable.
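To illustrate option 2, a test run on CI could stub the Twitter endpoint with an HTTP-mocking library such as nock, so no real credentials are needed; the endpoint path and payload below are only placeholders:
// Hypothetical test setup: intercept calls to the Twitter API so the
// suite can run on Travis CI without real credentials.
var nock = require('nock');
var Twitter = require('twitter');

nock('https://api.twitter.com')
  .get('/1.1/statuses/user_timeline.json')
  .query(true) // match any query string
  .reply(200, [{ text: 'stubbed tweet' }]);

var client = new Twitter({
  consumer_key: 'x',
  consumer_secret: 'x',
  access_token_key: 'x',
  access_token_secret: 'x'
});

client.get('statuses/user_timeline', { screen_name: 'nodejs' }, function (error, tweets) {
  console.log(error || tweets); // logs the stubbed response
});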
