Secure way to access encryption keys and credential data in nodejs environment? - node.js

I am running nodejs apps that access resources on different servers, and I need to store the keys somewhere. Now, it is considered a bad idea to check user credentials or encryption keys into a code repository. What is the best way to manage these keys in a secure manner?
One option I found is to save them in environment variables, and then read their value when needed in the script. Two methods I know of to populate env variables are: set them while running the script on the command line (e.g. USER_KEY=12345 node script.js) or read them from a local file on the hard drive. But I want to access these keys on a random CI pipeline machine, so I can't store a file on disk and can't pass them on the command line.
A second option is to read these keys from a remote machine. Is there a known Crypto/Key Management Service or another popular NodeJS way to save credentials in a key store and then let the script read them from there at run-time? Is there another way?
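For illustration, here is a minimal Node.js sketch of the two approaches mentioned above: read the key from an environment variable, and fall back to a managed secret store. AWS Secrets Manager is used only as one example of such a service, and the secret name my-app/user-key is hypothetical.

const { SecretsManagerClient, GetSecretValueCommand } = require('@aws-sdk/client-secrets-manager');

// Prefer the environment variable (e.g. USER_KEY=12345 node script.js);
// otherwise fetch the key from the remote secret store at run-time.
async function getUserKey() {
  if (process.env.USER_KEY) {
    return process.env.USER_KEY;
  }
  const client = new SecretsManagerClient({ region: 'us-east-1' });
  const res = await client.send(new GetSecretValueCommand({ SecretId: 'my-app/user-key' })); // hypothetical secret name
  return res.SecretString;
}

getUserKey().then((key) => {
  // use the key to access the remote resources
});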

Related

Safely store password for automatic restic backup

Context
platforms: archlinux and ubuntu
I have a shell script that backs up my data to a restic server. In order to perform the backup, the script needs access to the restic repository password. There are multiple ways to provide restic with the password (user input, env variable, shell command, file) and I am currently saving the password as plaintext in a file.
Problem
This file is only accessible to root (the script runs as a systemd service as root), but that does not make it particularly secure. Anyone who gets access to my laptop could recover my backup password. I know I can change the password of a repository if my laptop gets stolen, but I am looking for a solution that does not involve human intervention. I looked at how people more experienced than me handle this but could not find a better way.
The user input method does not suit me as I want the script to be fully automated.
The environment variable method only moves the problem as this variable needs to be set at some point and stored in a file.
A shell command could perhaps decrypt the password from a file, but that also just moves the problem to storing the decryption key. However, if the decryption key could be handled by the system in a secure way, that could work. I don't have any experience with that, so I don't know where to look, but this is the most promising approach I have found.
Question
Is there a secure way to store the password of a restic repository in order to perform automatic backups that would prevent an attacker (that gets access to the machine) from recovering the password?
PS
I want to avoid manually entering the password. I want the script to be fully automatic. I am looking for some kind of lock on the password file that would open when I am logged in. I have no idea if such a thing exists.
Thanks!
You would need to customize restic itself so that it fetches the password from some storage vault on the web (check the code base and make the modification there to read the password from the vault): https://github.com/restic/restic
Something like this:
Restic should not take the password from the command line or an environment variable.
Restic should take the password from a storage vault that holds it. For this you need to customize it by changing the restic code base.
You can also store the password encrypted in the vault and have restic do the decryption (this requires additional changes).
You will need to learn Go to do this implementation, as restic is written in Go.
It might take about a week to do the implementation.

Bind NodeJS app variables to Pivotal Cloud Foundry Service

I am looking to bind a PCF (Pivotal Cloud Foundry) service to let us set certain API endpoints used by our UI within the PCF environment. I want to use the values in this service to overwrite the values in the root-directory file 'config.json'. Are there any examples out there that accomplish this sort of thing?
The primary way to tackle this is to have your application do this parsing. Most (all?) programming languages give you the ability to load environment variables and to parse JSON. Using these capabilities, what you'd want to do is read the VCAP_SERVICES environment variable and parse the JSON. This is where the platform inserts the information from your bound services. From there, you have the configuration information, so you can configure your app using the values from your bound service.
Manual Ex:
var vcap_services = JSON.parse(process.env.VCAP_SERVICES)
Or you can use a library: there's a handy Node.js library called cfenv. You can read more about both of these options in the docs:
https://docs.cloudfoundry.org/buildpacks/node/node-service-bindings.html
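For example, a minimal sketch using cfenv; the service name 'my-config-service' is just a placeholder for whatever service you actually bind:

const cfenv = require('cfenv');

// getAppEnv() parses VCAP_SERVICES / VCAP_APPLICATION for you
const appEnv = cfenv.getAppEnv();

// returns the credentials object of the named bound service, or null if it is not bound
const creds = appEnv.getServiceCreds('my-config-service');
console.log(creds);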
If you cannot read the configuration inside of your application (perhaps there's a timing problem and you need the information before your app starts), you can use the platform's pre-runtime hooks.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
The pre-runtime hooks allow your application to include a file called .profile which executes before your application starts. The .profile file is a simple Bash script which can do anything needed to get your application ready to run. The only catch is that it needs to finish quickly, because it must complete before your application can start up, and your application has a finite amount of time to start (usually 60s).
In your case, you could use jq to parse your values and insert them into your config file, perhaps using sed to overwrite a template value. Another option would be to run a small Node.js script (since your app is using Node.js, it should be available on the path when this script runs) to read the environment variables and generate your config file, as in the sketch below.
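A minimal sketch of such a script, assuming the endpoints live in the credentials of a user-provided service named 'ui-config' (both the service type and the name are placeholders); .profile would then just run node generate-config.js before starting the app:

// generate-config.js (hypothetical name): reads VCAP_SERVICES and overwrites
// matching values in config.json before the application starts.
const fs = require('fs');

const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');
const service = (vcap['user-provided'] || []).find((s) => s.name === 'ui-config');

if (service) {
  const config = JSON.parse(fs.readFileSync('config.json', 'utf8'));
  Object.assign(config, service.credentials); // overwrite the api endpoints
  fs.writeFileSync('config.json', JSON.stringify(config, null, 2));
}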
Hope that helps!

Securely store Hash in Docker Image

I am building a series of applications using Docker and want to securely store my API keys, DB access keys, etc. In an effort to make my application more secure, I am storing my configuration file in a password-protected, zipped volume set to read-only. I can use Python's zipfile module to read the configuration, including using a password.
However, I don't want to store the password explicitly in the image, for obvious reasons. I have played around with passlib to generate a hash for the password and compare against it. While I am fine with storing the hash in a file in the image, I'd like to generate the hash without storing the plaintext value in a layer of the image.
Would it be good practice to do this? The Dockerfile I have in mind would look like the following:
FROM my_custom_python_image:3.6
WORKDIR /app
COPY . /app
RUN python -m pip install -r requirements.txt
RUN python create_hash.py --token 'mysecret' >> myhash.txt
# The rest of the file here
And create_hash.py would look like:
from passlib.hash import pbkdf2_sha256
import argparse

# Parse the --token flag from the command line
parser = argparse.ArgumentParser()
parser.add_argument('--token', required=True)
args = parser.parse_args()

hash = pbkdf2_sha256.encrypt(args.token, rounds=200000, salt_size=16)
print(hash)
If my Dockerfile is not stored in the image and the file system is read-only, is the value I pass to --token stored? If it is, what's a good workaround here? Again, the end goal is to use context.verify(user_token, hash) to pass the user_token to ZipFile and not explicitly store the password anywhere.
You should pass these values as part of the run-time deployment, not at build time.
That makes your application more flexible (as it can be used in different environments with only parameter changes) and more secure, as the keys are simply not in the image.
How to pass values securely during deployment depends on the deployment environment and its features.
Anything in a RUN command will be visible later via docker history.
The most secure readily accessible way to provide configuration like passwords to an application like this is to put the configuration file in a host directory with appropriate permissions and then use docker run -v or a similar option to mount that into the running container. Depending on how much you trust your host system, passing options as environment variables works well too (anyone who can run docker inspect or anyone else with root access on the system can see that, but they could read a config file too).
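For example, something like the following (paths and image name are hypothetical) mounts a config file from the host into the container read-only at run time:
docker run -v /host/secrets/config.json:/app/config.json:ro my_custom_app_image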
With your proposed approach, I suspect you will need the actual password (not a hash) to decrypt the file. Also configuration by its nature changes somewhat independently of the application, which means you could be in a situation where you need to rebuild your application just because a database hostname changed, which isn't quite what you usually want.

Bundle Git SSH keys into a private AMI

I have an EC2 instance which runs an app hosted on a private git repo.
I need to be able to launch many of these from my master server. At the moment, I have 5 fixed "worker" instances which I start/stop from the master with no problem. Each worker starts, pulls the repo, and launches the app on startup. This is obviously not a good solution and I want to make it more flexible (launch as many instances as I want, etc). The configuration and packages are final so I feel good about bundling it all into an AMI.
Is there a way for me to bundle my git keys into the AMI, in order to launch many similar instances and have them all pull and launch my app on startup without having to connect to each of them and enter the password? Is there a better way? I've read about cloud-init, user-data, puppet and many other things, but I'm quite a novice in this area and couldn't find a proper example using ssh keys.
Instead of bundling the keys into the AMI, I suggest you keep them separate from the AMI because:
If you change your git keys, you don't have to build a new AMI
Unauthorized users who have privileges to launch an instance from your AMI cannot launch your app
I suggest using the user-data feature. You can optionally encrypt your keys and base64-encode them if you want to. When you launch your instance manually or using the CLI/API, you can pass your keys, which can be accessed by the instance once it is launched. There are a variety of ways to access the data (Python and curl, to name a few). I suggest you use the AWS metadata server because your instance does not need your AWS credentials to fetch the user-data. Once your instance is launched, have your app make the following call, get the keys and then pull the repo:
curl http://169.254.169.254/latest/user-data
This returns your user-data (no credentials needed). You can then base64-decode and decrypt your keys and use them to pull the repo. If you do not want the extra security, you can skip the encrypt/base64 part.
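If the app on the instance happens to be Node.js, a minimal sketch of that call might look like this; the base64 layer is optional and the decryption step is omitted:

const http = require('http');

// The user-data endpoint is only reachable from inside the instance itself.
http.get('http://169.254.169.254/latest/user-data', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const key = Buffer.from(body, 'base64').toString('utf8');
    // write the key to ~/.ssh (or hand it to an ssh-agent) and pull the repo from here
  });
});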

Windows 7 sharing data between users via the registry

Where can I create/modify/delete registry keys to share data between users in the Windows 7 registry? Both of the users are non administrators and it shouldn't require admin privileges.
The application I'm working on uses the registry to write a key from userA and then userB can read/modify/delete it. Neither user has admin privileges and it won't be possible to change this.
Is there an official MSDN guide to how to use the registry in Windows 7? Any links describing proper use of the registry would be useful.
You cannot write to HKLM without elevation, so you simply cannot do what you described.
I suggest some of the following:
1. Choose other data storage, e.g. a database, a file, etc. that all your users can access.
2. Create a Windows service running as LocalSystem (which gives read/write access to HKLM) and make your apps talk to the service via named pipes/COM/a socket.
The registry is for storing configuration settings, not for sharing data between users; you're really using it for the wrong purpose.
However, if you have to, the only place in the registry that would make sense even a little would be in the HKEY_LOCAL_MACHINE hive, in Software\yourapp, but I'm fairly sure that there is nowhere in there that's writeable by normal users by default.
If you are able to, you could create that key and then change the permissions for the users group so that they have full access.
This wiki article might help in seeing how the registry is best used.
On Windows 7, write access to HKLM is only for apps running as admin. If the app has no manifest, the write will be virtualized, meaning it goes to a separate per-user store.
I think you should use a config file in a per-application location that is not per-user, like %PROGRAMDATA%, and have your setup/installer (which probably does run as admin) write a single registry key that tells where this file is. The non-admin users can then easily read and write the file while using the application.
The registry is not really the right way to do this. Can you give us some more details about what you're actually trying to do?
Are the users logged in at the same time? In this case, some kind of interprocess-communication (IPC) mechanism might work. For example: named pipes, shared memory, sockets, etc.
If not, will you have a process running at all times (i.e. a service)? This could be used as a sort of drop-box mechanism.
If you've got an installer, you could create a directory that's accessible to both users (put them in the same group, for simplicity's sake). Then you could drop message files in there.
In short: the registry is really designed for long-lived configuration settings. Short-lived communications really ought to be done some other way.
