Right way to store sensitive credentials for a web app

I have a Java web app running on EC2 under Tomcat (a WAR) that requires various sensitive configuration parameters - for example, the credentials associated with various other AWS services. I had been setting these as environment variables, but then discovered that running Tomcat as a service removes almost all environment variables. So currently I use a simple configuration file to store these values.
I don't believe this is a wise choice going forward, however, and would like to find an alternative. What is the right way to handle this kind of sensitive information?

IAM Roles are going to be your best friend here. The official documentation on IAM roles for Amazon EC2 will point you in the right direction, and there is also a post on the AWS Security Blog covering the same ground.
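To illustrate the idea, here is a minimal sketch in TypeScript with AWS SDK v3 (the original app is Java, but the Java SDK's DefaultAWSCredentialsProviderChain behaves the same way). The point: no credentials appear in code, environment variables or config files. When the process runs on an EC2 instance with an IAM role attached, the SDK fetches short-lived credentials from the instance metadata service automatically:

// Sketch: no stored keys anywhere - the SDK resolves temporary
// credentials from the EC2 instance's IAM role automatically.
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // region is the only config needed

async function main() {
  // Authorized by the instance's IAM role, not by credentials on disk.
  const { Buckets } = await s3.send(new ListBucketsCommand({}));
  console.log(Buckets?.map((b) => b.Name));
}

main().catch(console.error);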

Related

Externalize configuration in Node.js

I am going to deploy a Node.js service on OpenShift, and there are a few properties, such as database configs and app properties, that I need to externalize.
I have Java applications running as part of the solution that use a config server as the config store, with Git as the source. I have seen npm libraries that integrate with Spring Cloud Config Server.
So I am looking for best practices here: what would be the best approach for externalizing configs in Node.js under orchestration tools like Kubernetes or OpenShift? Or can we go with the config server in the above scenario?
Please let me know; any pointers are highly appreciated.
There are multiple possibilities, one being the Cloud Config Server as you noted. However, the simple approach, per the Twelve-Factor App, is to store the config in the environment:
The twelve-factor app stores config in environment variables
In OpenShift / Kubernetes, this means storing the configuration alongside the Deployment itself, in ConfigMaps or Secrets, and then wiring these in with envFrom.configMapRef (the Kubernetes documentation includes an example).
If you are moving towards orchestration tools, I would say use their offering. In k8s, you would typically use ConfigMaps to manage your application configs. The beauty of this solution is that you can also do Configuration as Code, keeping your ConfigMaps version-controlled.
One more thing: Node.js best practice is to use environment variables, so you can use the orchestrator's offering to mount all your configs into the environment, plus you get Secrets for your sensitive info (API keys, etc.).
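As a concrete sketch of that env-var approach (DB_HOST, DB_PORT and API_KEY are hypothetical names for illustration): with a ConfigMap or Secret wired in via envFrom, the Node.js process just reads process.env, and the same code runs unchanged on a laptop where the values come from a local .env file:

// Sketch: read config from the environment, wherever the values came from
// (ConfigMap/Secret via envFrom in k8s, or a local .env file in dev).
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

export const config = {
  dbHost: requireEnv("DB_HOST"),
  dbPort: Number(process.env.DB_PORT ?? 5432), // optional, with a default
  apiKey: requireEnv("API_KEY"),               // would come from a Secret in k8s
};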
In case it helps anyone: we went with the environment variable approach, since we had very few parameters to work with and we don't expect much change there. If it grows, we will look at the ConfigMap approach (as also suggested by simon / obanby above).

How to edit a local config file through an API

On all our servers we have .env files, which set the config for the (Node.js) server on start.
Now I want to edit these files from an admin panel (another web service that talks to the main server through an API).
Are there any best practices, or just good ideas, for how I can accomplish that?
First idea: create another web server on the instance, with only two API endpoints (read, write), which restarts the server after editing the configs. This idea looks too heavy.
Second idea: create a bash script that asks the admin servers for the current configs and rewrites the local .env file if it finds changes. But that means a lot of unnecessary requests (a request every minute, while configs change maybe once a month).
What do you think? Any ideas?
You have a couple of options, and it depends primarily on your deployment strategy.
If you have a distributed environment and/or your configuration changes often (i.e.: running multiple docker containers, rotating keys, etc.) I'd highly recommend using a K/V store and reading configuration(s) dynamically during application start. Check out HashiCorp Vault, etcd or even mongodb.
If your configuration contains sensitive data, definitely use something like HashiCorp Vault. If you use a configuration tool like Ansible, it has ansible-vault, which will encrypt your secrets at rest and decrypt them during deployment.
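As a rough sketch of the read-at-startup pattern against Vault's KV v2 HTTP API (the VAULT_ADDR/VAULT_TOKEN variables and the secret/data/myapp path are assumptions for illustration; no client library is needed on Node 18+, which ships a global fetch):

// Sketch: pull secrets from HashiCorp Vault when the application starts,
// instead of keeping them in .env files on disk.
interface KvV2Response {
  data: { data: Record<string, string> };
}

async function loadSecrets(path: string): Promise<Record<string, string>> {
  const addr = process.env.VAULT_ADDR ?? "http://127.0.0.1:8200";
  const res = await fetch(`${addr}/v1/${path}`, {
    headers: { "X-Vault-Token": process.env.VAULT_TOKEN ?? "" },
  });
  if (!res.ok) throw new Error(`Vault returned HTTP ${res.status}`);
  const body = (await res.json()) as KvV2Response;
  return body.data.data; // KV v2 nests the key/value pairs under data.data
}

// At startup:
// const secrets = await loadSecrets("secret/data/myapp");
// const dbPassword = secrets["db_password"];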
I would highly advise against storing (even potentially) sensitive data such as api keys, tokens, etc. in version control. This is a pretty big attack vector and will lead you down a dark road.
Worst case, use environment variables. Almost all CI/CD tooling supports them, and you can maintain separation of concerns.

Spring Cloud Data Flow: enable security on Cloud Foundry

I am trying to set up security for SCDF (Spring Cloud Data Flow) on PCF.
Based on the documentation, it is possible to enable security for the SCDF dashboard when running the examples against a locally installed server. Even the simplest example:
java -jar spring-cloud-dataflow-server-local/target/spring-cloud-dataflow-server-local-1.2.1.RELEASE.jar \
--security.basic.enabled=true \
--security.user.name=test \
--security.user.password=pass \
--security.user.role=VIEW
it runs correctly - the dashboard shows a login screen.
However, for a PCF deployment it is not documented how to achieve this. I tried setting environment variables for the server app in various ways (e.g. SPRING_APPLICATION_JSON), but with no positive results.
It would be great to understand how to do that - best if authorization used a PCF user/password, and without needing to build a customized server jar. An SCDF tile on PCF would be a great help, I guess.
Looking forward to hearing from you.
Best regards,
Wojtek
For Cloud Foundry, the recommendation would be to use the SSO/UAA integration for a more seamless security solution.
If you're interested in a development/exploration-level security solution, you could use the Basic Auth support. As for how the auth credentials get passed to the server: the env var SPRING_APPLICATION_JSON is indeed the preferred approach, e.g. something like cf set-env <server-app> SPRING_APPLICATION_JSON '{"security.basic.enabled": true, "security.user.name": "test", "security.user.password": "pass", "security.user.role": "VIEW"}' followed by cf restage.

Easiest server and database services available for deploying an application (AWS specifically)

I have written a real-time multiplayer game and am currently writing its server in Node.js. I want my game to have login, leveling up, etc., so I need a database. This is the first time I am deploying something and I am mostly self-taught, so please correct me if I am mixing things up. Since this is my first attempt, I do not want to make much of a commitment right away, so I am looking for free options only. And since this is a real-time game, I need relatively fast server responses. That is why I am looking for the easiest database and server provider that would do, and I am aware that with those restrictions I have limited choices and functionality.
As far as I have read online, Heroku seems to be my simplest option for a server (that is why I started writing in Node.js). However, it seems there is no free database service, since all the options at https://devcenter.heroku.com/articles/heroku-postgres-plans have a monthly fee. I did not want to use Google App Engine since I am new (it certainly is not described as beginner-friendly).
So I found AWS via the "Free Cloud Database Service for home development" post; it seems I could use Amazon Web Services for both the server and the database. However, most posts I have encountered suggest Google App Engine or Heroku, with little mention of AWS. Is this because I am mixing concepts up, or does AWS have drawbacks that I am not aware of? Do you think it is a good idea to use AWS both as server and database? Is it possible to use Heroku as the server while using AWS as the database, or do you have any other suggestions?
Note: Sorry for the question bombardment, but these are all related and I am somewhat lost in this topic, so I had to ask...
Use AWS EC2 for the server and RDS for the database. The reason people use Heroku is that it deploys to a custom URL very quickly (it is easy to set up). Setting up AWS requires some knowledge of how servers work, but it is not that complicated (and it is free for small apps). Best of luck!
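To make that concrete, here is a minimal sketch of a Node.js server on EC2 talking to a PostgreSQL RDS instance, assuming the node-postgres (pg) package; the host, table and column names are placeholders:

// Sketch: connection details come from the environment, not from code.
import { Pool } from "pg";

const pool = new Pool({
  host: process.env.DB_HOST, // e.g. the RDS endpoint hostname
  port: Number(process.env.DB_PORT ?? 5432),
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

// Hypothetical example: bump a player's level after a match.
export async function levelUp(playerId: string): Promise<number> {
  const result = await pool.query(
    "UPDATE players SET level = level + 1 WHERE id = $1 RETURNING level",
    [playerId]
  );
  return result.rows[0].level;
}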

Securing Elasticsearch

I am completely new to Elasticsearch, but I like it very much. The only thing I can't find and can't get working is how to secure Elasticsearch for production systems. I read a lot about using nginx as a proxy in front of Elasticsearch, but I have never used nginx and never worked with proxies.
Is this the typical way to secure Elasticsearch in production systems?
If so, are there any tutorials or nice reads that could help me implement this? I really would like to use Elasticsearch in our production system instead of Solr and Tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There's also some things here you should know: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone that can send arbitrary requests to your cluster is a “super user”.
Disable dynamic scripts. They are dangerous.
Understand that sometimes tricky configuration is required to limit access control to indexes.
Consider the performance implications of multiple tenants: a weakness or a bad query in one can bring down an entire cluster!
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). Even without basic auth in your proxy rules, you might, for instance, restrict access to various endpoints to specific users or from specific IP addresses.
What we do in one of our environments is use Docker. Docker containers are only accessible to the outside world and/or other Docker containers if you explicitly define them as such. By default, they are unreachable.
In our docker-compose setup, we have the following containers defined:
nginx - Handles all web requests, serves up static files and proxies API queries to a container named 'middleware'
middleware - A Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
redis
mongodb
elasticsearch
The net effect of this arrangement is that access to Elasticsearch can only go through the middleware piece, which ensures authentication, roles and permissions are correctly handled before any queries are sent through.
A full Docker environment is more work to set up than a simple nginx proxy, but the end result is more flexible, scalable and secure.
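To illustrate the middleware pattern (the middleware above is Java, but the idea is stack-agnostic), here is a TypeScript/Express sketch; the bearer-token check, the es:9200 container hostname and the products index are assumptions for illustration:

// Sketch: clients never reach Elasticsearch directly. Every request is
// authenticated first, and only a constrained query against a fixed index
// is forwarded - no access to /_cluster, /_all or other endpoints.
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical auth check - replace with real session/token validation.
function isAuthenticated(req: express.Request): boolean {
  return req.header("Authorization") === `Bearer ${process.env.API_TOKEN}`;
}

app.post("/search", async (req, res) => {
  if (!isAuthenticated(req)) {
    res.status(401).send("Unauthorized");
    return;
  }
  const esRes = await fetch("http://es:9200/products/_search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: req.body.query, size: 20 }),
  });
  res.status(esRes.status).json(await esRes.json());
});

app.listen(3000);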
Here's a very important addition to the info presented in answers above. I would have added it as a comment, but don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield here.
A key point to note: this requires Elasticsearch version 1.5 or newer.
Yes, I also had the same question, and I found one plugin provided by the Elasticsearch team: Shield. It comes as a limited version; for production use you need to buy a license. Please find the attached link for your perusal:
https://www.elastic.co/guide/en/shield/current/index.html
