I am trying to set up security for SCDF on PCF.
Based on the documentation, it is possible to enable security for the SCDF dashboard when running the provided examples on a locally installed server. Even the easiest example:
java -jar spring-cloud-dataflow-server-local/target/spring-cloud-dataflow-server-local-1.2.1.RELEASE.jar \
--security.basic.enabled=true \
--security.user.name=test \
--security.user.password=pass \
--security.user.role=VIEW
runs correctly - the dashboard shows the login screen.
However, for the PCF deployment it is not documented how to achieve this.
I tried setting environment variables for the server app in various ways (e.g. SPRING_APPLICATION_JSON, etc.), but with no positive results.
It would be great to understand how to do that.
Ideally, authorisation would use the PCF user/password, without the need to build a customised server jar.
An SCDF tile on PCF would be a great help, I guess.
Looking forward to hearing from you,
best regards
Wojtek
For Cloud Foundry, it'd be recommended to use the SSO/UAA integration for a more seamless security solution.
If you're interested in a development/exploration-level security solution, you could use the Basic Auth support. It is unclear from the question how exactly the auth credentials were being passed to the server, but the SPRING_APPLICATION_JSON environment variable is indeed the preferred way to pass them.
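For a quick check with the Basic Auth properties, one way (a minimal sketch; the app name and credentials below are just illustrative, mirroring the local example) is to put SPRING_APPLICATION_JSON into the server's manifest:

# manifest.yml sketch -- app name and credentials are illustrative
applications:
- name: dataflow-server
  env:
    SPRING_APPLICATION_JSON: '{"security.basic.enabled": true, "security.user.name": "test", "security.user.password": "pass", "security.user.role": "VIEW"}'

The same JSON can also be applied to an already-pushed app with cf set-env followed by a restage.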
I am going to deploy a Node.js service in OpenShift, and there are a few properties, such as database configs and app properties, which I need to externalize.
I have Java applications running as part of the solution which use Config Server as the config store and Git as the source. I have seen npm libraries for integrating with Spring Config Server.
So, I am looking for best practices here: what would be the best approach for externalizing configs in Node.js under orchestration tools like k8s or OpenShift? Or can we go with Config Server in the above scenario?
Please let me know of any info; any pointers are highly appreciated.
There are multiple possibilities, one being the Cloud Config Server as you noted. However, in the naive approach, according to the Twelve-Factor App, the config should be stored in the environment:
The twelve-factor app stores config in environment variables
In OpenShift / Kubernetes, this means that we store the configuration in the Deployment itself, or in ConfigMaps or Secrets, and then use these with envFrom.configMapRef (here is an example).
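As a concrete sketch of that (all names, the image and the values here are made up for illustration), a ConfigMap consumed through envFrom.configMapRef could look like this:

# ConfigMap + Deployment sketch -- names, image and values are illustrative
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-service-config
data:
  DATABASE_HOST: "db.internal"
  DATABASE_NAME: "orders"
  LOG_LEVEL: "info"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-service
  template:
    metadata:
      labels:
        app: node-service
    spec:
      containers:
        - name: node-service
          image: registry.example.com/node-service:1.0   # illustrative image
          envFrom:
            - configMapRef:
                name: node-service-config   # every key in the ConfigMap becomes an env var

The Node.js process then reads the values via process.env (e.g. process.env.DATABASE_HOST), and sensitive values would go into a Secret referenced with secretRef instead.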
If you are moving towards orchestration tools, I would say use their offering. In k8s, you would typically use ConfigMaps to manage your application configs. The beauty of this solution is that you can also do Configuration as Code, so you keep your ConfigMaps version-controlled.
One more thing: Node.js best practice is to use environment variables, so you can use the orchestrator's offering to mount all your configs into the environment; plus, you can use Secrets for your sensitive info (API keys, etc.).
In case it helps anyone: we went for the environment-variable approach since we had very few parameters to work with and don't see this changing much. If it grows, we would look at the ConfigMap approach (as also suggested by simon / obanby above).
As said in the title, is there a way to add application users in a Thorntail WildFly server, much like you would do with the "add-user.sh -a" script in the full server distribution?
I understand you can provide an external configuration file to Thorntail, but that seems a bit of an overhead just for specifying where the users are located.
Thanks
The answer by Thomas Herzog is very good from a conceptual point of view -- I'd especially agree with securing the application using an external Keycloak, potentially with the help of MicroProfile JWT. I'm just gonna provide a few points in case you decide not to.
You can define users directly in project-defaults.yml, like this:
thorntail:
management:
security-realms:
ApplicationRealm:
in-memory-authentication:
users:
bob:
password: tacos!
in-memory-authorization:
users:
bob:
roles:
- admin
The project-defaults.yml file doesn't have to be external to the app; you can build it directly into it. Typically, in your source code, the file will be located in src/main/resources, and after building, it will be embedded inside the -thorntail.jar. It can be external, of course, and if this is anything other than a throwaway prototype or test, sensitive data like this should be external.
You can also use the .properties files from WildFly:
thorntail:
management:
security-realms:
ApplicationRealm:
properties-authentication:
path: .../path/to/application-users.properties
properties-authorization:
path: .../path/to/application-roles.properties
It depends on what you need the users for. Thorntail creates standalone microservices, which are different from applications hosted in a WildFly server.
Is there a management console in Thorntail?
Yes there is, but I have never used it.
https://docs.thorntail.io/2.2.0.Final/#_management
https://docs.thorntail.io/2.2.0.Final/#_management_console
Any users you might be able to create there won't be persistent, because there is no WildFly server installation as you are used to with a standalone WildFly installation; it is all packaged in the jar. A microservice shouldn't need to be configured after its deployment anymore, at least not like this.
How to secure my application?
I would recommend using external user management via Keycloak, which is integrated into Thorntail via the keycloak fraction. With the keycloak fraction you can define security constraints for your endpoints, similar to a web.xml.
https://docs.thorntail.io/2.2.0.Final/#_keycloak
Another way is to use the security fraction, which provides JAAS support for your microservice.
https://docs.thorntail.io/2.2.0.Final/#_security
The configuration is done via the Thorntail-specific project-defaults.yml configuration file, where you configure the fractions via YAML.
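As a rough sketch of what that can look like in project-defaults.yml (the archive name, URL pattern and role are made up here, and the exact keys can differ between Thorntail versions, so verify against the docs linked above):

# project-defaults.yml sketch -- archive name, url-pattern and role are illustrative
thorntail:
  deployment:
    my-service.war:
      web:
        login-config:
          auth-method: KEYCLOAK
        security-constraints:
          - url-pattern: /protected/*
            roles:
              - admin

The Keycloak realm and server details would then come from the usual Keycloak adapter configuration (for example a keycloak.json or the keycloak fraction's settings), as described in the fraction documentation.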
What is a thorntail fraction?
A Thorntail fraction is similar to a Spring Boot starter dependency in Spring: the fraction provides the API for development and bundles the implementation and its integration into Thorntail. The fraction is actually a JBoss module which is packaged into the standalone microservice during the re-packaging phase.
Where can I find examples?
See the following link for examples of how to use security in Thorntail; you should take a look at them.
https://github.com/thorntail/thorntail-examples/tree/master/security
Take a look at src/main/resources/project-defaults.yml, which contains the configuration for the Thorntail fractions, and at the pom.xml, which defines the fractions used.
I've been bitten by the test-driven infrastructure bug. My current project is using Azure, including SQL Azure, Azure tables, cloud services, and mobile services. Configuring an entire environment is somewhat complex. Now I'm looking for a testing framework that I can use to verify that the environment is configured correctly. Something like "Confirm that there's a mobile service endpoint named foo, that it has APNS and GCM endpoints, and that there is a Google API key and Apple push certificate associated." There is more, but that is complex enough that existing tools don't seem to cover it, yet simple enough to describe in a single sentence.
Because of the number of products, I have to use both the PowerShell module and the cross-platform CLI to script the setup. The cross-platform CLI looks like the easiest way to get data out (it uses Node and can easily dump JSON data), but I'm at a loss as to how to even start with testing JSON dumps from a Node module that was never really intended to be used as a module.
The PowerShell module is buggy and doesn't have any ability to read mobile services information.
There is a ruby gem for managing Azure, but it's very limited. So my hope of being able to work all in Ruby was dashed. There too, I'm not sure how one would use ServerSpec to test a remote node without actually running anything on the remote node.
I'd like to stay within the realm of something that would be understandable by another Azure developer (e.g. JavaScript, PowerShell, and potentially Ruby) and not have to start from scratch with something like Erlang or Brainf**k.
Corey - this is a big area of ongoing build-out on Azure right now, which is why you are finding limited support. Resource Manager is aimed at driving programmable infrastructure (http://azure.microsoft.com/en-us/documentation/articles/xplat-cli-azure-resource-manager/) but doesn't yet encapsulate all Azure service offerings.
There are also the Management Libraries (for .NET) - http://www.bradygaster.com/post/getting-started-with-the-windows-azure-management-libraries - or, at the most basic level, there is the pure REST API that you can code against directly if there are bits missing from the above (which is likely) - http://msdn.microsoft.com/en-us/library/azure/ee460799.aspx
I have a Java web app running on EC2 under Tomcat (a WAR) that requires various sensitive configuration parameters - for example, the credentials associated with various other AWS services. I had been setting these as environment variables, but then discovered that running Tomcat as a service removes almost all environment variables. So currently I use a simple configuration file to store these values.
I don't believe this is a wise choice going forward, however, and would like to find an alternative. What is the right way to handle this kind of sensitive information?
IAM Roles are going to be your best friend here. The official docs here will point you in the right direction. There's also a post on the AWS security blog about it here.
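To illustrate the idea (this is only a sketch; the role, policy and bucket name are made up, and you can just as well create the role in the console or CLI): with an instance profile attached to the EC2 instance, the AWS SDK picks up temporary credentials automatically from instance metadata, so nothing sensitive has to live in Tomcat's environment or in a config file. In CloudFormation terms it might look like:

# CloudFormation sketch -- role, policy and bucket name are illustrative
Resources:
  AppRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: app-s3-read
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: s3:GetObject
                Resource: arn:aws:s3:::my-app-config/*   # illustrative bucket
  AppInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref AppRole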
I am completely new to Elasticsearch, but I like it very much. The only thing I can't find and can't get done is securing Elasticsearch for production systems. I have read a lot about using nginx as a proxy in front of Elasticsearch, but I have never used nginx and never worked with proxies.
Is this the typical way to secure elasticsearch in production systems?
If so, are there any tutorials or nice reads that could help me implement this? I really would like to use Elasticsearch in our production system instead of Solr and Tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There are also some things here you should know: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone that can send arbitrary requests to your cluster is a “super user”.
Disable dynamic scripts. They are dangerous.
Understand that sometimes tricky configuration is required to limit access to indexes.
Consider the performance implications of multiple tenants: a weakness or a bad query in one can bring down an entire cluster!
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). Even without basic auth, your proxy rules might, for instance, restrict access to various endpoints to specific users or to specific IP addresses.
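Alongside the proxy, a couple of elasticsearch.yml settings are worth sketching out for the 1.x-era versions this discussion is about (setting names changed in later releases, so check them against your version):

# elasticsearch.yml sketch -- settings for the 1.x-era versions discussed here
script.disable_dynamic: true   # dynamic scripts are dangerous; turn them off
network.host: 127.0.0.1        # bind to localhost if the proxy runs on the same host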
What we do in one of our environments is to use Docker. Docker containers are only accessible to the outside world and/or to other Docker containers if you explicitly define them as such. By default, they are isolated.
In our docker-compose setup, we have the following containers defined:
nginx - Handles all web requests, serves up static files and proxies API queries to a container named 'middleware'
middleware - A Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
redis
mongodb
elasticsearch
The net effect of this arrangement is that access to elasticsearch can only go through the middleware piece, which ensures authentication, roles and permissions are handled correctly before any queries are sent through.
A full docker environment is more work to set up than a simple nginx proxy, but the end result is something that is more flexible, scalable and secure.
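A stripped-down docker-compose sketch of that layout might look like the following (the images and the middleware service are illustrative); only nginx publishes a port to the host, so redis, mongodb and elasticsearch are reachable solely from other containers on the compose network:

# docker-compose.yml sketch -- images and the middleware service are illustrative
version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "443:443"               # the only port published to the outside world
    depends_on:
      - middleware
  middleware:
    image: example/middleware    # Java API server handling auth, roles and permissions
    depends_on:
      - redis
      - mongodb
      - elasticsearch
  redis:
    image: redis
  mongodb:
    image: mongo
  elasticsearch:
    image: elasticsearch:1.7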
Here's a very important addition to the info presented in answers above. I would have added it as a comment, but don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield here.
A key point to note is that this requires Elasticsearch version 1.5 or newer.
Yes, I also had the same question, but I found one plugin provided by the Elasticsearch team, i.e. Shield. It is a limited version; for production you need to buy a license. Please find the attached link for your perusal.
https://www.elastic.co/guide/en/shield/current/index.html