How to simply edit a local config file through an API - Node.js

All of our servers have .env files that set the configuration for the Node.js server on start.
Now I want to edit these files from an admin panel (another web service that talks to the main server through an API).
Are there any best practices, or just good ideas, for how I could do that?
First idea: run another web server on each instance that exposes only two API endpoints (read, write) and restarts the main server after editing the configs. This feels too heavyweight.
Second idea: a bash script that polls the admin server for the current configs and rewrites the local .env file if it finds changes. But that means a lot of unnecessary requests (a request every minute, while the configs change about once a month).
What do you think? Any ideas?

You have a couple of options, and it depends primarily on your deployment strategy.
If you have a distributed environment and/or your configuration changes often (i.e. running multiple Docker containers, rotating keys, etc.), I'd highly recommend using a K/V store and reading configuration dynamically during application start. Check out HashiCorp Vault, etcd, or even MongoDB.
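For illustration, here is a rough sketch of reading secrets from Vault's KV v2 HTTP API at application start in Node.js (Node 18+ for global fetch; the mount path "secret/data/myapp" and the VAULT_ADDR / VAULT_TOKEN variable names are assumptions to adapt to your setup):

    // Sketch only: load config from Vault before the server starts.
    async function loadConfig() {
      const res = await fetch(`${process.env.VAULT_ADDR}/v1/secret/data/myapp`, {
        headers: { 'X-Vault-Token': process.env.VAULT_TOKEN },
      });
      if (!res.ok) throw new Error(`Vault request failed: ${res.status}`);
      const body = await res.json();
      return body.data.data; // KV v2 nests the secret under data.data
    }

    loadConfig().then((config) => {
      // start the server only once configuration is available
      console.log('loaded config keys:', Object.keys(config));
    });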
If your configuration contains sensitive data, definitely use something like HashiCorp Vault. If you use a configuration tool like Ansible, it has ansible-vault, which will encrypt your secrets at rest and decrypt them during deployment.
I would highly advise against storing (even potentially) sensitive data such as api keys, tokens, etc. in version control. This is a pretty big attack vector and will lead you down a dark road.
Worst case, use environment variables. Almost all CI/CD tooling supports these, and you can maintain separation of concerns.
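As a minimal sketch of the environment variable approach (the variable names here are made up), a small config module can fail fast at startup if something required is missing:

    // config.js - sketch of a fail-fast environment-based config module
    const required = ['DATABASE_URL', 'API_KEY']; // hypothetical names
    for (const name of required) {
      if (!process.env[name]) {
        throw new Error(`Missing required environment variable: ${name}`);
      }
    }
    module.exports = {
      databaseUrl: process.env.DATABASE_URL,
      apiKey: process.env.API_KEY,
      port: Number(process.env.PORT) || 3000, // optional, with a default
    };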

Related

How can I securely store account data in an open source app?

I need to store user account data for my open source app, such as email, hashed password, favourites, etc.
The two options I have considered for this are storing the info in a MongoDB Atlas database or storing it in a JSON file.
Since the app is open source on GitHub, these two options create some problems. If MongoDB is used, then my API key will be exposed in the source code, which isn't that great. It will also make it difficult for people to test the project locally. If a local JSON file is used, users will be able to see all of a user's sensitive info stored inside the repository, which definitely isn't good. What are my options here to be able to securely and easily control account creation and data storage? Cheers.
Since the app is open source on GitHub, these two options create some problems. If MongoDB is used, then my API key will be exposed in the source code...
This is a good case to use environment variables on your local machine. In the code you can do something like
process.env.<APPLICATION_NAME>_API_KEY;
If the developers you're working with are trusted enough to handle your database credentials, then you can share them behind closed doors.
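For local development, one common pattern is the dotenv package with a gitignored .env file, so each developer keeps their own credentials out of the repo (the variable name below is a placeholder):

    // .env (gitignored, never committed):
    //   MYAPP_API_KEY=abc123
    require('dotenv').config(); // copies .env entries into process.env
    const apiKey = process.env.MYAPP_API_KEY;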
It will also make it difficult for people to test the project locally...
This is good practice for whatever future endeavours you might have with Node.js or other software development. Testing should never be done against production databases. If the database you're testing on isn't the production one, that's great, but there's still the issue of using sensitive user data for testing purposes. Banks hopefully aren't using real accounts to test transactions. What if there's an error and the account ends up drained, with no log of how much was in it to start with?
I would recommend you set up a way to fill a database with dummy data for testing purposes. This could be an SQL script that you commit to your repo, updating your instructions so new developers and contributors can spin up their own database for testing.
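For the MongoDB case, a hypothetical seed script could look something like this (database, collection, and dummy records are all made up):

    // seed.js - fill a local test database with dummy accounts
    const { MongoClient } = require('mongodb');

    async function seed() {
      const client = new MongoClient('mongodb://localhost:27017');
      await client.connect();
      const users = client.db('myapp_test').collection('users');
      await users.deleteMany({}); // start from a clean slate
      await users.insertMany([
        { email: 'alice@example.com', passwordHash: 'fake-hash-1', favourites: [] },
        { email: 'bob@example.com', passwordHash: 'fake-hash-2', favourites: [] },
      ]);
      await client.close();
    }

    seed().then(() => console.log('test database seeded'));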

Externalize configuration in Node.js

I am going to deploy a Node.js service on OpenShift, and there are a few properties, such as database configs and app properties, that I need to externalize.
I have Java applications running as part of the solution that use a config server as the config store, with Git as the source. I have seen npm libraries for integrating with Spring Config Server.
So I am looking for best practices here: what would be the best approach for externalizing configs in Node.js on orchestration tools like Kubernetes or OpenShift? Or can we go with the config server in the above scenario?
Please let me know of any info; any pointers are highly appreciated.
There are multiple possibilities, one being the Cloud Config Server as you noted. However, the naive approach, per the Twelve-Factor App, is to store the config in the environment:
The twelve-factor app stores config in environment variables
In OpenShift / Kubernetes, this means storing the configuration in the Deployment itself, in ConfigMaps or Secrets, and then consuming it with envFrom.configMapRef.
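A minimal sketch of that (all names here are placeholders):

    # ConfigMap holding plain app settings
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: myapp-config
    data:
      DATABASE_HOST: db.example.com
      LOG_LEVEL: info

    # In the Deployment's container spec, expose it as environment variables:
    #   envFrom:
    #   - configMapRef:
    #       name: myapp-config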
If you are moving towards orchestration tools, I would say use their offering. In k8s, you would typically use ConfigMaps to manage your application configs. The beauty of this solution is that you can also do Configuration as Code, so you keep your ConfigMaps version-controlled.
One more thing: Node.js best practice is to use environment variables, so you can use the orchestration offering to mount all your configs into the environment. Plus, you get secrets encryption for your sensitive info (API keys, etc.).
In case it helps anyone: we went with the environment variable approach, since we had very few parameters to work with and don't expect them to change much. If that grows, we will look at the ConfigMap approach (as also suggested by simon / obanby above).

How to centralize configurations across multiple Node services?

There are multiple Node services currently deployed and running through pm2 in an AWS environment.
The difficulty (in terms of maintenance) I see in my current code base is that each of these Node services has a separate configuration file (config/app.json). Though most of the properties in these configuration files are common to all the services, every property is repeated in each individual service's code. If any of these properties changes, I have to make the change in multiple places.
I would like to centralise the configurations across the Node services. Is there a way to do that? The expectation is to have a centralised place for maintaining configurations. Any references would help.
I am not sure what your architecture looks like, but if you do not mind creating a small library or microservice that just fetches your configurations from a small NoSQL database such as Redis, which stores key-value pairs, it will give you your configurations in one centralized place.
The only configuration that then remains is Redis's own, which you can provide while building the service as an environment variable, using something like yargs.
Then every service has to make only one call to fill up its config JSON (in your case, config/app.json).
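A sketch of that startup call in Node.js with node-redis v4 (the key name and REDIS_URL variable are assumptions):

    // Fetch the shared config from Redis once at startup
    const { createClient } = require('redis');

    async function loadSharedConfig() {
      const client = createClient({ url: process.env.REDIS_URL });
      await client.connect();
      const raw = await client.get('shared:app-config'); // JSON string
      await client.quit();
      return JSON.parse(raw);
    }

Each service can then merge the result over whatever local defaults it still needs.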

Right way to store sensitive credentials for web app

I have a Java web app running on EC2 under Tomcat (a WAR) that requires various sensitive configuration parameters - for example, the credentials associated with various other AWS services. I had been setting these as environment variables, but then discovered that running Tomcat as a service removes almost all environment variables. So currently I use a simple configuration file to store these values.
I don't believe this is a wise choice going forward, however, and would like to find an alternative. What is the right way to handle this kind of sensitive information?
IAM Roles are going to be your best friend here. The official docs here will point you in the right direction. There's also a post on the AWS security blog about it here.
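The nice part is that the AWS SDKs resolve role credentials automatically through their default credential chain. The question is about Java, but for illustration in Node.js (AWS SDK for JavaScript v3), no keys appear anywhere:

    // With an IAM role attached to the EC2 instance, the SDK's default
    // credential chain finds temporary credentials on its own.
    const { S3Client, ListBucketsCommand } = require('@aws-sdk/client-s3');

    const s3 = new S3Client({ region: 'us-east-1' }); // no credentials passed
    s3.send(new ListBucketsCommand({})).then((out) =>
      console.log(out.Buckets.map((b) => b.Name)),
    );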

GAE: best practices for storing secret keys?

Are there any non-terrible ways of storing secret keys for Google App Engine? Or, at least, less terrible than checking them into source control?
In the meantime, Google added a Key Management Service: https://cloud.google.com/kms/
You could use it to encrypt your secrets before storing them in a database, or store them encrypted in source control. Only people with both 'decrypt' access to KMS and access to your secrets would be able to use them.
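As a rough sketch with the @google-cloud/kms Node.js client (the project, key ring, and key names are placeholders):

    const { KeyManagementServiceClient } = require('@google-cloud/kms');
    const client = new KeyManagementServiceClient();

    async function encryptSecret(plaintext) {
      const keyName = client.cryptoKeyPath(
        'my-project', 'global', 'my-keyring', 'my-key');
      const [result] = await client.encrypt({
        name: keyName,
        plaintext: Buffer.from(plaintext),
      });
      return result.ciphertext; // safe to store in a database or repo
    }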
The fact remains that people who can deploy code will always be able to get to your secrets (assuming your GAE app needs to be able to use the secrets), but there's no way around that as far as I can think of.
Not exactly an answer:
If you keep keys in the model, anyone who can deploy can read the keys from the model, and deploy again to cover their tracks. While Google lets you download code (unless you disable this feature), I think it only keeps the latest copy of each numbered version.
If you keep keys in a not-checked-in config file and disable code downloads, then only people with the keys can successfully deploy, but nobody can read the keys without sneaking a backdoor into the deployment (potentially not that difficult).
At the end of the day, anyone who can deploy can get at the keys, so the question is whether you think the risk is minimized by storing keys in the datastore (which you might make backups of, for example) or on deployers' machines.
A viable alternative might be to combine the two: store encrypted API keys in the datastore and put the master key in a config file. This has some potentially nice features (see the sketch after the list):
Attackers need both access to a copy of the datastore and a copy of the config file (and presumably developers don't make backups of the datastore on a laptop and lose it on the train).
By specifying two keys in the config file, you can do key-rollover (so attackers need a datastore/config of similar age).
With asymmetric crypto, you can make it possible for developers to add an API key to the datastore without needing to read the others.
Of course, then you're uploading crypto to Google's servers, which may or may not count as "exporting" crypto with the usual legal issues (e.g. what if Google sets up an Asia-Pacific data centre?).
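A sketch of the encrypt/decrypt half of that scheme, using Node's built-in crypto module purely for illustration (the master key would come from the not-checked-in config file):

    // Illustration with AES-256-GCM; masterKey must be a 32-byte Buffer
    // loaded from the not-checked-in config file.
    const crypto = require('crypto');

    function encryptApiKey(masterKey, apiKey) {
      const iv = crypto.randomBytes(12); // fresh IV per encryption
      const cipher = crypto.createCipheriv('aes-256-gcm', masterKey, iv);
      const ciphertext = Buffer.concat([cipher.update(apiKey, 'utf8'), cipher.final()]);
      return { iv, ciphertext, tag: cipher.getAuthTag() }; // store all three
    }

    function decryptApiKey(masterKey, { iv, ciphertext, tag }) {
      const decipher = crypto.createDecipheriv('aes-256-gcm', masterKey, iv);
      decipher.setAuthTag(tag); // authenticates the ciphertext
      return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
    }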
There's no easy solution here. Checking keys into the repository is bad both because it checks in irrelevant configuration details and because it potentially exposes sensitive data. I generally create a configuration model for this, with exactly one entity, and set the relevant configuration options and keys on it after the first deployment (or whenever they change).
Alternatively, you can check in a sample configuration file, exclude the real one from version control, and keep the actual keys locally. This requires some way to distribute the keys, though, and makes it impossible for a developer to deploy unless they have the production keys (and all too easy to accidentally deploy the sample configuration file over the live one).
Three ways I can think of:
Store it in Datastore (maybe base64-encode it for one more level of indirection).
Pass it as environment variables through command-line params during deployment.
Keep a configuration file, git-ignore it, and read it from the server. The file itself can be a .py file if you are using a Python deployment, so there's no reading and storing of .json files.
NOTE: if you take the conf-file route, don't store this JSON in the static public folders!
If you are using Laravel and want to store your keys in Datastore, this package can make that easy while managing performance using caching: https://github.com/tommerrett/laravel-GAE-secret-manager
Google App Engine by default creates a credential for App Engine and injects it into the environment.
Google Cloud client libraries use a strategy called Application Default Credentials (ADC) to find your application's credentials. When your code uses a client library, the strategy checks for your credentials in the following order:
First, ADC checks to see if the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If the variable is set, ADC uses the service account file that the variable points to.
If the environment variable isn't set, ADC uses the default service account that Compute Engine, Google Kubernetes Engine, Cloud Run, App Engine, and Cloud Functions provide, for applications that run on those services.
If ADC can't use either of the above credentials, an error occurs.
So point 2 means that if you grant the permissions to your service account using IAM Admin, you do not have to worry about passing JSON keys; it will automatically work.
E.g., suppose your application runs on App Engine Standard and wants access to Google Cloud Storage. You do not have to create a new service account for this; just grant the access to the App Engine default service account, and ADC will use it.
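For example, with the @google-cloud/storage Node.js client, no key file is referenced anywhere (the bucket name is a placeholder):

    // ADC picks up the App Engine default service account automatically.
    const { Storage } = require('@google-cloud/storage');
    const storage = new Storage(); // no credentials passed

    storage.bucket('my-bucket').getFiles().then(([files]) => {
      files.forEach((f) => console.log(f.name));
    });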
REF https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
