Rails 6+: order in which Rails reads SECRET_KEY_BASE (env var versus credentials.yml.enc) - credentials

For context, I'm in the process of updating a Rails app to 5.2 and then to 6.0.
I'm updating my credentials to use the config/credentials.yml.enc and config/master.key defaults with Rails 5.2+ apps.
The Rails docs state:
In test and development applications get a secret_key_base derived from the app name. Other environments must use a random key present in config/credentials.yml.enc
(emphasis added)
This leads me to think that in production the SECRET_KEY_BASE value is required to be read from Rails.application.credentials.secret_key_base via config/credentials.yml.enc. In test and development environments, the secret_key_base is essentially "irrelevant", since it's derived from the app name.
However, when I was looking at the Rails source code, it reads:
def key
  read_env_key || read_key_file || handle_missing_key
end
That seems to say the order of reading values is:
ENV["SECRET_KEY_BASE"]
Rails.application.credentials.secret_key_base
Raise an error
I use Heroku for my hosting, and have an ENV["SECRET_KEY_BASE"] environment variable that stores this secret value.
Questions
If I have both ENV["SECRET_KEY_BASE"] and Rails.application.credentials.secret_key_base set, which one takes priority?
Is using the ENV var going to be deprecated at some point?
I have lots of environment-specific ENV variables because I don't want to use my production accounts in development for AWS S3 buckets, Stripe accounts, etc. The flat-file format of credentials.yml.enc seems to assume developers only need to access these 3rd-party APIs in production. Is there an accepted way yet to handle environment-specific credentials in Rails?
I read through the comment threads on DHH's original PR as well as a linked PR that says it implements environment-specific credentials, but the docs don't mention this implementation so I'm not certain if it's the standard or if it's going to go away sometime soon.

Related

How to use node package dotenv to access local development environment variables in Red Hat OpenShift application?

I'm revisiting a project which hasn't been updated for a while.
In the production/online environment, it uses environment variables defined at:
openshift online console > applications > deployments > my node app > environment
In the development/offline environment, it uses environment variables defined at:
./src/js/my_modules/local_settings (this file is ignored by .gitignore)
The code looks something like:
// check which environment we are in
if (process.env.MONGODB_USER) {
  var online_status = "online";
} else {
  var online_status = "offline";
}
// if online, use environment variables defined in Red Hat OpenShift
if (online_status === 'online') {
  var site_title = process.env.SITE_TITLE;
  var site_description = process.env.SITE_DESCRIPTION;
  // etc.
}
// if offline, get settings from a local file
else if (online_status === 'offline') {
  var local_settings = require('./src/js/my_modules/local_settings');
  var site_title = local_settings.SITE_TITLE;
  var site_description = local_settings.SITE_DESCRIPTION;
  // etc.
}
I would like to install the dotenv package in my local project repo via:
npm install dotenv
So that I can:
Have my local settings in a .env file in the root of my project (ignored in .gitignore)
Be able to use process.env.SOME_VARIABLE rather than local_settings.SOME_VARIABLE
Get rid of some if/else blocks as both scenarios would point to process.env.SOME_VARIABLE
I'm a bit confused as to how this would affect the online environment.
Seeing as both production/online and development/offline environments would use:
var some_variable = process.env.SOME_VARIABLE_HERE
would the application automatically know to:
Look at the local .env file when in development?
Look at the Red Hat environment variables when in production?
And would adding the required instantiation at the beginning of the server-side file:
require('dotenv').config()
somehow make Red Hat OpenShift freak out (as it seems to already have its own 'things' in place to resolve references to process.env.SOME_VARIABLE_HERE to the relevant values defined in the OpenShift console)?
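To illustrate, the refactor I have in mind would look something like this at the top of my server entry file (just a sketch; SITE_TITLE and SITE_DESCRIPTION are the variables from my current setup):
// load local .env values (if the file exists) before anything reads process.env
require('dotenv').config();

// both environments would then read settings the same way
var site_title = process.env.SITE_TITLE;
var site_description = process.env.SITE_DESCRIPTION;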
Keeping a file per environment (.dev, .staging, .prod) in the source code repository, or manually on the server (if those are in .gitignore), worked for a long time, but it now goes against DevOps practice.
The cleaner way is to use environment variables, but managed remotely and fetched at the start of your application.
How does it work?
Basically, your app no longer reads or needs a file (.env, .properties, etc.) with variables. It loads them from a remote HTTP service.
Not intrusive
In this approach, you don't need language-specific handling (Node.js in your case). You just need to prepare your app to use environment variables. Your application doesn't care where the variables come from; they just need to be available at the operating-system level.
To achieve that, you just need to download the variables using a simple shell script or a very basic HTTP invocation in your favorite language.
After that, once your app has started, the variables are ready to use at the most basic level:
var site_title = process.env.SITE_TITLE;
This approach is not intrusive because your app doesn't need anything complex like a library or an algorithm in some programming language. It just needs the environment variables.
Intrusive
Same as the previous alternative, but instead of reading the variables directly from the environment, you use or create a class/module in your language. This offers you the variables you need:
var site_title = VariablesManager.getProperty("SITE_TITLE");
At startup, the VariablesManager must have consumed the variables from a remote service (HTTP) and stored them, so it can offer them to whoever needs them through the getProperty method.
This VariablesManager also usually has a feature called hot reload, which updates the variables at intervals by consuming the remote variables manager again. With this, if your application is running in production with real users and some variable needs to be updated, you just change it in the variables manager. Your app automatically loads the new values, without a restart or anyone touching the app.
This approach is intrusive because you need to load advanced libraries in some programming language, or create them yourself.
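For illustration, a minimal Node.js sketch of such a module (the endpoint URL, the JSON response shape and the module name are assumptions, not a real library):
// variables-manager.js - hypothetical module; endpoint and response shape are assumed
const https = require('https');

let cache = {};

// fetch the variables once (e.g. at startup) from the remote manager
function load(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => {
        cache = JSON.parse(body); // e.g. { "SITE_TITLE": "Acme", ... }
        resolve(cache);
      });
    }).on('error', reject);
  });
}

// offer the variables to whoever needs them
function getProperty(name) {
  return cache[name];
}

module.exports = { load, getProperty };
Hot reload then amounts to calling load() again on a setInterval timer, so updated values are picked up without a restart.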
DevOps
Your application just needs a few properties or settings related to consuming the remote variables. For example, the variables of acme-web-staging:
remote_variables_manager = https://variables.com/api
application_id = acme-web-staging
secure_key = *****
You could hide the secure key and parameterize the application_id using environment variables (created in the platform console):
remote_variables_manager = https://variables.com/api
application_id = ${application_id}
secure_key = ${remote_variables_manager_key}
Or, if you want one variables manager per environment:
staging
remote_variables_manager = https://variables-staging.com/api
application_id = acme-web
secure_key = *****
production
remote_variables_manager = https://variables-production.com/api
application_id = acme-web
secure_key = *****
Variables manager
This concept was introduced many years ago. I used it with Java. It consists of a web application with features like:
secure login
create applications
create variables of an application
encrypt sensitive values
publish HTTP endpoints to download or query the variables by application
Here is a list of some ready-to-use alternatives:
Configurator
A Node.js & MySQL solution. I developed this and use it in various projects.
Doppler
zookeeper
http://www.therore.net/java/2015/05/03/distributed-configuration-with-zookeeper-curator-and-spring-cloud-config.html
Spring Cloud
https://www.baeldung.com/spring-cloud-configuration
This is Java Spring Framework functionality with which you can create properties files holding configuration and configure your applications to read them.
Consul
Consul is a service mesh solution providing a full featured control plane with service discovery, configuration, and segmentation functionality.
doozerd, etcd
In your specific case
Don't use dotenv
Use pure process.env.foo
Deploy a remote variables manager in your OpenShift infrastructure
Create just one variable in your openshift web console: APP_ENVIRONMENT
In your code at the start, do something like this:
if (process.env.APP_ENVIRONMENT === "PROD") {
  // get the variables from the remote service using
  // some HTTP client like axios, request, etc.,
  // then inject them into your process.env, e.g.:
  // const remoteVariables = (await axios.get(remoteVariablesUrl)).data;
  process.env.site_url = remoteVariables.site_url;
} else {
  // we are in the local developer workspace,
  // so nothing complex is required;
  // the developer injects the variables manually
  // before startup (npm run start or npm run dev):
  // export site_url="acme.com"
}
If you can configure a shell script to run before the start of your OpenShift app, you could load and expose the variables at that stage; the previous snippet would then not be necessary, because the variables would already be available via process.env directly in your app.

Error running Vorto Dashboard for Bosch IoT Suite

I am trying to run Vorto dashboard on Raspberry Pi to visualize my Bosch IoT "things" data.
In order to run the Vorto Dashboard, I installed npm and nodejs and created the config.json file.
I am getting the below error whenever I try to run the dashboard using the command sudo vorto-dashboard config.json, even though I have already added the OAuth2 client credentials.
No credentials given, can not get things
Could not get the token with given credentials. - StatusCodeError: 400 -
{"error":"unauthorized_client","error_description":"INVALID_CREDENTIALS:
Invalid client credentials"}
I am currently contributing to the Vorto Project as an intern at Bosch. Due to changes in the Vorto Dashboard, we combined and merged the functionality of a previous dashboard with another coexisting, updated UI, providing advanced ways to visualize the existing devices.
As the uploaded state was work in progress, we temporarily disabled the config.json methodology and removed the existing references from the documentation. Apparently, the reference in the tutorial you found was missed, sorry for that!
Today I deployed a new version 0.5.0 of the vorto-dashboard, which should work as usual. You are now able to work with either process.env.[...] variables or a config.json file. Thank you Mena for the quick response!
Feel free to let me know if you need any further help or have additional feedback.
TL;DR
To resolve your issue, store your OAuth credentials as environment variables.
E.g. on Debian et al., export BOSCH_CLIENT_ID=... etc., then start the dashboard in the same terminal.
Context
I was about to ask the same question, as I got the same error message no matter how I referenced the config.json file (relative path, absolute path, no reference, etc.).
For clarification, the tutorial pointing to a config.json resource for storing OAuth credentials is here.
Quoting:
While the dependencies are being installed, create the config.json file and insert client_id, secret and scope from your Already created
OAuth2 Client. The content of the file has to look like this:
{
  "client_id": "<YOUR_CLIENT_ID>",
  "client_secret": "<YOUR_CLIENT_SECRET>",
  "scope": "<YOUR_SCOPE>",
  "intervalMS": 10000
}
The reference to the config.json file has been removed from the README.md resource in the vorto-dashboard module of vorto-examples.
The latest README.md suggests providing the OAuth credentials through environment variables:
You can provide your OAuth2 credentials through environment variables.
The three environment variables you have to provide are:
BOSCH_CLIENT_ID
BOSCH_CLIENT_SECRET
BOSCH_SCOPE
[...]
Looking at the source, I can only find an explicit reference to a config.json in the start script entry of package_for_deployment.json (nothing around the source seems to be consuming, say, argv[2] for that matter).
The AuthToken.js resource in charge of handling OAuth credentials only seems to read environment variables, through process.env.[...].
Elaboration
This is only speculation at the time of writing, but I suspect the reason why the config.json methodology has been abandoned might have something to do with strengthening security, i.e. not storing OAuth credentials permanently in a file.
If that much is true, then the tutorial page should probably be amended with the latest instructions from the README.md.

Using environment variables in Karate DSL testing

I'd like to incorporate GitLab CI into my Karate testing. I'd like to loop through my tests with different user names and passwords to ensure our API endpoints are responding correctly to different users.
With that in mind, I'd like to be able to store the usernames and passwords as secure environment variables in GitLab (rather than in the karate-config as plain text) and have Karate pull them as needed from either the karate-config or the feature files.
Looking through the docs and StackOverflow questions, I haven't seen an example where it's being done.
Updating with new information
In regard to Peter's comment below, which is what I need, I am trying to set it up as follows:
set client id in karate-config:
var client_id = java.lang.System.getenv('client_id');
in the actual config object:
clientId: client_id
In my feature file, I tried to access it:
* def client_id = clientId
It still comes through as null, unfortunately.
You can read Java system properties in Karate using karate.properties, e.g.:
karate.properties['java.home']
If that lets you read the values you are keeping securely on your GitLab, then you can use them in your karate-config for authentication.
But your config and environment variables will get cumbersome if you have too many users.
If you want to run a few features with multiple users, I would suggest you look into this post,
Can we loop feature files and execute using multiple login users in karate
EDIT:
Using Java interop as suggested by Peter:
var systemPath = java.lang.System.getenv('PATH');
To see which variables are actually exposed, try:
var evars= java.lang.System.getenv();
karate.log(evars);
and see the list of all environment variables.
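Putting it together, a karate-config.js along these lines (a sketch using the asker's variable names, and assuming client_id is exported as an environment variable in the GitLab CI settings) exposes the value to feature files:
function fn() {
  // read the value exported by GitLab CI (or your local shell)
  var client_id = java.lang.System.getenv('client_id');
  if (!client_id) {
    karate.log('client_id environment variable is not set');
  }
  return {
    clientId: client_id
  };
}
The feature file can then use * def client_id = clientId as in the question; a null there means the variable was not visible to the process that ran the tests.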

How can I set environment variables for dredd testing in a dredd.yml file?

I'm trying to run a number of api calls using dredd and api blueprint to test a site. I would like to run the tests on circleCI, as there are Selenium tests running in the same place. Each transaction needs to be accompanied by two tokens, which are set as cookies in the headers. Ideally, these would be set in the dredd.yml file. When running on a local machine, if I replace ACCESS_TOKEN and REFRESH_TOKEN with the actual values, the test runs as expected.
circle.yml:
test:
  override:
    - dredd
dredd.yml headers
header: ['Cookie: access_token=ACCESS_TOKEN; refresh_token=REFRESH_TOKEN']
Where ACCESS_TOKEN and REFRESH_TOKEN get replaced by the actual values set in CircleCI's environment variables. I have also tried: access_token=$[ACCESS_TOKEN], access_token=$["ACCESS_TOKEN"] and access_token=$ACCESS_TOKEN. None of these are being replaced in the headers for the first API call.
The header looks like: {"Content-Type":"application/json; charset=utf-8","User-Agent":"Dredd/1.4.0 (Darwin 14.5.0; x64)","Cookie":" access_token=$ACCESS_TOKEN; refresh_token=$REFRESH_TOKEN"}
I am new to yaml files, so I'm probably missing something basic, but I did search around for a while. The hooks file is written with node.js, so I don't think the ruby/rails help will be useful here. If I am missing anything in the question don't hesitate to let me know.
YAML is a data representation language, not a template language (or template processor, for that matter). While an individual program might support loading environment variables or additional parameters named in the configuration, the YAML parser (probably, unless it's a custom module) isn't what's injecting them. Skimming the Dredd docs, I don't see any references to environment variables or parameters; it may be worth creating an issue on the project and starting a discussion with the developers to see if this is supported.
I can think of a number of ways to solve your specific problem, but they all involve additional tools to render the YAML with your variables injected. Perhaps the easiest solution for your case is to set environment variables in the CircleCI web configuration (NOT the version-controlled circle.yml). Then set up a pre-build step where the YAML configuration is generated. To do this, wrap the YAML in a Bash script, with the YAML document contained inside it as a here-doc.
#!/bin/bash
# ACCESS_TOKEN and REFRESH_TOKEN are injected by CircleCI
cat <<EOF > config.yml
---
header: ['Cookie: access_token=${ACCESS_TOKEN}; refresh_token=${REFRESH_TOKEN}']
EOF
Then run the rest of your job normally, perhaps deleting the configuration file or restoring it from version control before any artifacts are created to avoid the leakage of your credentials.
A better way to work with headers is to use hook files that set the headers before each request. As you are using Node.js, read the tokens from Node environment variables:
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  transaction.request.headers.Cookie =
    'access_token=' + process.env.ACCESS_TOKEN +
    '; refresh_token=' + process.env.REFRESH_TOKEN;
});

Is there any better way to pass sensitive data to the program, alternative to env variables?

I want to use the npm twitter package, and it recommends using env variables, but setting them up on Windows machines is a horror, so I want to avoid env variables. The next attempt is to keep the variables in an external JSON file (like here in my repo), which is never committed, but that doesn't play well with CI, because if it's not in the repo, how can I use it to test, right?
Let me show.
env variables (windows users’ nightmare):
var Twitter = require('twitter');
var client = new Twitter({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
  access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET,
});
untestable crap
var Twitter = require('twitter');
var keys = require('./keys.json');
var client = new Twitter(keys);
with this line in .gitignore:
keys.json
no winners situation; is there any better way?
There are no winners in this situation and it makes me sad. I want to achieve two simple goals: easy consumption and testability. Can you help me? How do you deal with this?
Update: I'm talking in terms of developing an open-source lib based on the Twitter API, not an end-user product; that's why I feel insecure about keeping tokens in the repo.
Update 2: Windows users have the set and setx commands. Hurray! Thanks to Martin Konecny for noting this.
Solution: while there is nothing wrong with setting up env variables on Windows, it's better to let the code consumer choose how to pass data to their end product (which uses my lib). So we end up with a situation that has no "data-passing" problem. And because of that it's testable, since I can use env variables in my tests to test it in Travis CI.
Just let your user choose what's best for them. Implement (or use a library, as such libraries exist for most languages) something that will let you pass and shadow properties in many different formats: API, file, env variables, command line (see the sketch after this list). Then:
in your local tests you can simply use the API as part of the test configuration
in your integration tests you can put a JSON file (ignored by git)
on Travis you can use command-line parameters or environment properties
in production you will use environment properties or a remote configuration server
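As a rough sketch of that idea in Node.js (the resolveKeys name, the fallback order and the keys.json path are illustrative, not a specific library):
// keys.js - resolve credentials from an options object, then env vars, then an optional keys.json
var fs = require('fs');

function resolveKeys(options) {
  options = options || {};
  var fromFile = {};
  if (fs.existsSync('./keys.json')) {
    fromFile = JSON.parse(fs.readFileSync('./keys.json', 'utf8'));
  }
  return {
    consumer_key: options.consumer_key || process.env.TWITTER_CONSUMER_KEY || fromFile.consumer_key,
    consumer_secret: options.consumer_secret || process.env.TWITTER_CONSUMER_SECRET || fromFile.consumer_secret,
    access_token_key: options.access_token_key || process.env.TWITTER_ACCESS_TOKEN_KEY || fromFile.access_token_key,
    access_token_secret: options.access_token_secret || process.env.TWITTER_ACCESS_TOKEN_SECRET || fromFile.access_token_secret
  };
}

module.exports = resolveKeys;
The consumer can then call new Twitter(resolveKeys()) and pick whichever source suits them: a keys.json locally, environment variables on Travis, or explicit options in test code.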
Seeing as this is an open-source project, you will most likely not be able to include your Twitter API keys inside the project itself. I see two potential solutions:
1. Require the users to register for their own Twitter API credentials, and add these to your project's config file before running the project and its tests.
2. If you are trying to use something like Travis CI to auto-test any new commits, you may need to mock your requests instead.
Option #2 may not be ideal since it doesn't take into account any future API changes from Twitter; however, it allows you to test for breakage on any commit, assuming the API does remain stable.
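For option #2, the mocking could look something like this, using an HTTP mocking library such as nock (my suggestion for illustration, not something mandated by the twitter package):
var nock = require('nock');
var Twitter = require('twitter');

// intercept calls to the Twitter API and return canned data instead
nock('https://api.twitter.com')
  .get('/1.1/statuses/user_timeline.json')
  .query(true)
  .reply(200, [{ text: 'stubbed tweet' }]);

// fake credentials are fine here because no real request leaves the process
var client = new Twitter({
  consumer_key: 'fake',
  consumer_secret: 'fake',
  access_token_key: 'fake',
  access_token_secret: 'fake'
});

client.get('statuses/user_timeline', { screen_name: 'someone' }, function (error, tweets) {
  console.log(tweets); // [{ text: 'stubbed tweet' }]
});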

Resources