Hiding secrets in an intake catalog for remote access (S3/MinIO)

I'm trying to build an intake catalog for my team. The datasets are on a shared MinIO server, for which each user should have their own service account and therefore their own key/secret pair.
When creating the first catalog entry like this:
source = intake.open_netcdf(
    "s3://bucket/path/to/file.netcdf",
    storage_options=storage_options,
)
where storage_options is a dictionary (read from a JSON file that each user keeps on their own file system) containing:
{
  "key": "KEY",
  "secret": "SECRET",
  "client_kwargs": {"endpoint_url": "http://X.X.X.X:9000"}
}
i.e. the credentials that s3fs needs to access the MinIO server, I get a catalog entry containing the secrets:
sources:
  my_dataset:
    args:
      storage_options:
        client_kwargs:
          endpoint_url: http://X.X.X.X:9000
        key: KEY
        secret: SECRET
      urlpath: s3://bucket/path/to/file.netcdf
    description: 'my description'
    driver: intake_xarray.netcdf.NetCDFSource
Now this catalog file shouldn't be shared because it contains secrets, defeating the purpose of having a catalog. My question then is: how do I make intake read the storage_options part from the secrets file that each user will have? (Ideally without having to change from JSON to YAML, but that's not a requirement.)

Fortunately, AWS tooling already provides for this, either via environment variables or credentials files placed in special locations (see https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#environment-variables and the sections below it).
Intake also has ways of templating values, but these ultimately rely on the environment or on prompting the user directly. Additionally, your case is complicated by the fact that you need these values not in a top-level parameter, but nested inside storage_options. We could probably improve this system, but it would still raise the question: where should the secret values come from?
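In the meantime, a pragmatic workaround is to keep the catalog itself free of secrets and have each user load their own credentials file at runtime, passing it in as storage_options. A minimal sketch (the file location and helper name are assumptions, matching the per-user JSON file described above):

```python
import json
from pathlib import Path

# Hypothetical location for each user's private credentials file;
# any path outside the shared catalog works.
SECRETS_PATH = Path.home() / ".minio_credentials.json"

def load_storage_options(path=SECRETS_PATH):
    """Read the s3fs storage_options (key, secret, endpoint_url)
    from the user's private JSON file."""
    with open(path) as f:
        return json.load(f)

# Usage sketch:
# import intake
# source = intake.open_netcdf(
#     "s3://bucket/path/to/file.netcdf",
#     storage_options=load_storage_options(),
# )
```

Alternatively, if key and secret are dropped from storage_options entirely, s3fs falls back to botocore's usual credential chain (the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables, or ~/.aws/credentials), so only the non-secret endpoint_url needs to live in the shared catalog.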

Related

Why ASP.NET Core AddDataProtection Keys cannot be loaded from AppSettings.json

I want to know why there isn't an easy way to load your security keys from AppSettings.json instead of loading them off the file system as XML.
Here is the example from the Microsoft documentation.
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"{PATH TO COMMON KEY RING FOLDER}"))
    .SetApplicationName("SharedCookieApp");
services.ConfigureApplicationCookie(options => {
    options.Cookie.Name = ".AspNet.SharedCookie";
    options.Cookie.Path = "/";
});
I'm just wondering why there isn't something like the following.
services.AddDataProtection()
    .PersistKeysToAppSettings("EncryptionKeys")
    .SetApplicationName("SharedCookieApp");
services.ConfigureApplicationCookie(options => {
    options.Cookie.Name = ".AspNet.SharedCookie";
    options.Cookie.Path = "/";
});
I don't understand why storing keys in an XML file would be any different from storing them in your AppSettings.json. I know the format is different, but it's no more or less secure, correct?
I just want to be sure I'm not missing something.
Assumptions:
AppSettings.json is just as secure as some other XML file on disk
Azure AppSettings are securely stored and can only be accessed by permitted accounts
Azure AppSettings values would override any uploaded "developer" values
Developers will not store their production keys in source, surely right? :)
I know this would not work for expiring / recycling keys
"It's complicated"
1. We create keys on demand.
2. We create multiple keys; just before a key expires, we create a new one.
3. We need keys to be synchronized between applications.
4. We need keys to be encrypted where possible.
AppSettings gives us none of those things: applications can't update their own settings files, which rules out 1 and 2; web hosts don't copy a changed app settings file between instances, which rules out 3; and you can't encrypt values in AppSettings, which rules out 4.
The format isn't the problem. You could write your own encryption wrapper to cope with #4, but the rest is still necessary, so you would have to change how settings work so they are (safely) read/write, and then persuade web hosts to synchronize your custom settings file between instances.

Require key file by path in Google Cloud Function

I have a function for Google Cloud Functions that needs to require a JSON key file.
For example:
const SERVICE_ACCOUNT_EMAIL = 'your_service_account_email@developer.gserviceaccount.com';
const SERVICE_ACCOUNT_KEY_FILE = require('./path/to/your/service_account.json');
const jwtClient = new google.auth.JWT(
  SERVICE_ACCOUNT_EMAIL,
  null,
  SERVICE_ACCOUNT_KEY_FILE.private_key,
  ['https://www.googleapis.com/auth/androidpublisher'],
  null
);
How can I get access to SERVICE_ACCOUNT_KEY_FILE? Where should I upload the file, and how do I then find its path?
This is a key from the private/public key pair that you create. It will be wrapped inside a JSON file (at least that is the common usage). See examples here and documentation here. Never share your private key, but you can freely share your public key (in fact, it is required that you share your public key). And don't put your private key into source control (a git repo).
You can find a lot of good information about asymmetric cryptography (public key cryptography) online.
In case you are wondering where to add the key file, it should be here: https://www.npmjs.com/package/google-oauth-jwt#creating-a-service-account-using-the-google-developers-console
UPDATE:
Where should you add the file to be able to access it:
One of the best practices is to put the file in a bucket, allow the function's service account (the default one or a specific identity) to read that bucket, and load the file at runtime. The main reason is that you don't have to commit your secrets file; you just keep a reference to a bucket.

NodeJS and storing OAuth credentials, outside of the code base?

I am creating a NodeJS API server that will be delegating authentication to an OAuth2 server. While I could store the key and secret along with the source code, I want to avoid that since it feels like a security risk, and the secrets' lifespan doesn't match that of a server implementation update (key/secret refreshes will likely happen more often).
I could store them in a database or maybe a transient JSON file, but I would like to know what are considered best practices in the NodeJS world, or what is considered acceptable. Any suggestions are appreciated.
One option would be to set environment variables as part of your deployment and then access them in the code from the global process object:
const clientId = process.env.CLIENT_ID;
const clientSecret = process.env.CLIENT_SECRET;
Since I wanted to provide something that can store multiple values, I just created a JSON file and then read it into a module I called keystore (using an ES6 class):
const fs = require('fs');

class KeyStore {
  load() {
    // Load the JSON file from a location specified in the config
    // or process.env.MYSERVER_KEYSTORE
    const path = process.env.MYSERVER_KEYSTORE;
    this.keys = JSON.parse(fs.readFileSync(path, 'utf8'));
  }

  get(keyname) {
    // Return the key I am looking for
    return this.keys[keyname];
  }
}

module.exports = new KeyStore();
I would ideally want to store the file encrypted, but for now I am just storing it read only to the current user in the home directory.
If there is another way, that is considered 'better', then I am open to that.

New user roles in TYPO3 Neos

I need to add new user roles, such as "TYPO3.Neos:Creator".
TYPO3 Neos currently supports the roles "TYPO3.Neos:Editor" and "TYPO3.Neos:Administrator". How can I do it?
Not sure, but it seems available roles are not stored in the database, but rather are gathered from YAML configuration files (and stored in a cache?).
So, add a role in any Policy.yaml file, like:
roles:
  'My.Package:CreatorOfDoomRole':
    privileges: []
After that you can use the flow CLI command ./flow user:addrole <username> <role> to add a new role to a user (the roles are stored as comma-separated list in table typo3_flow_security_account, field roleidentifiers).
(Some more info about how yaml is cached: "The yaml files are cached, in development context that cache should be purged on every request (and on master that's a bit optimized so they will only be flushed in development context if there was really a change to the yaml). Stored in file: Data/Temporary/Production/Configuration/ProductionConfigurations.php")

What is the laravel way of storing API keys?

Is there a specific file or directory that is recommended for storing API keys? I'd like to take my keys out of my codebase but I'm not sure where to put them.
This is an updated answer for newer versions of Laravel.
First, set the credentials in your .env file. Generally you'll want to prefix it with the name of the service, so in this example I'll use Google Maps.
GOOGLE_KEY=secret_api_key
Then, take a look in config/services.php - it's where we can map environment variables into the app configuration. You'll see some existing examples out of the box. You can add additional configuration under the service name and point it to the environment variable.
'google' => [
    'key' => env('GOOGLE_KEY'),
],
Then when you need to access this key within your app you can get it through the app configuration instead.
// Through a facade
Config::get('services.google.key');
// Through a helper
config('services.google.key');
Be sure not to just use env('GOOGLE_KEY') throughout your app - it's more performant to go through the app configuration, as it's cached - especially if you call php artisan config:cache as part of your deployment process.
You can make your API keys environment variables and then access them that way. Read more about protecting sensitive configuration from the docs.
You simply create a .env.php file in the root of your project that returns an array of environment variables.
<?php

return array(
    'SECRET_API_KEY' => 'PUT YOUR API KEY HERE'
);
Then you can access it in your app like so.
getenv('SECRET_API_KEY');
