I have only one server, a Solr server. Is it possible to enable authentication and authorization for Solr 5 without installing ZooKeeper?
I know that one possible way is to configure, for example, iptables to allow access to the server only from certain hosts. But I am interested in Solr's own capabilities, without any external servers like ZooKeeper.
You can configure your container to do authentication yourself, but the only bundled support in Solr requires running Solr in SolrCloud mode (meaning it has to use either an external ZooKeeper or the internal, bundled one). From [the reference guide about Authentication and Authorization Plugins](https://cwiki.apache.org/confluence/display/solr/Authentication+and+Authorization+Plugins):
> To use these plugins, you must create a security.json file and upload it to ZooKeeper. This means that authentication and authorization is supported in SolrCloud mode only.
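For reference, the security.json from the reference guide looks like the following; the credentials entry is the stock example from the docs (user solr, password SolrRocks as a salted hash), not something to reuse in production:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "security-edit", "role": "admin" }
    ],
    "user-role": {
      "solr": "admin"
    }
  }
}
```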
You could also bind Solr to localhost (Solr shouldn't be exposed on public IPs anyway) and then use nginx or Apache as a reverse proxy for any requests and have it perform authentication. The configuration depends on the chosen httpd and how it handles Basic HTTP authentication.
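As a rough sketch (the hostname and htpasswd path below are placeholders), an nginx front end for a localhost-bound Solr might look like this:

```nginx
server {
    listen 80;
    server_name solr.example.com;                   # placeholder hostname

    location / {
        auth_basic           "Solr";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with the htpasswd tool
        proxy_pass           http://127.0.0.1:8983; # Solr bound to localhost
    }
}
```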
You don't have to install any external ZooKeeper to enable authentication and authorization on your Solr server. The internal ZooKeeper works perfectly fine.
http://lucidworks.com/blog/2015/08/17/securing-solr-basic-auth-permission-rules/
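With Solr 5.x and the default ports and paths, this boils down to something like the following sketch (the embedded ZooKeeper listens on the Solr port plus 1000, i.e. 9983 by default):

```sh
# Start Solr in SolrCloud mode with the embedded ZooKeeper
bin/solr start -c

# Upload your security.json to the embedded ZooKeeper
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 \
  -cmd putfile /security.json /path/to/security.json
```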
I hope this helps.
We have hosted Elasticsearch v7.11.2 in our non-cloud ecosystem. We are using Presto v0.248 to connect to it.
Our Elasticsearch is secured with basic authentication (currently non-SSL). We are able to connect, but because authentication is enabled we get a 401 HTTP status. Going by the documentation, we currently don't see any way to add a header or a username and password to the Presto connector.
Any help or pointers on how to enable this would be appreciated.
PrestoDB doesn't support user/password authentication for Elasticsearch. You may want to look at Trino (a fork of PrestoDB by its creators and major contributors, formerly known as PrestoSQL), which has had this feature since version 337 (the latest version is 354): https://trino.io/docs/current/connector/elasticsearch.html#elasticsearch-auth-user
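For illustration, a Trino Elasticsearch catalog with password authentication is configured roughly like this (host and credentials are placeholders; check the linked docs for the exact property names in your version):

```properties
# etc/catalog/elasticsearch.properties
connector.name=elasticsearch
elasticsearch.host=es.example.com
elasticsearch.port=9200
elasticsearch.security=PASSWORD
elasticsearch.auth.user=elastic
elasticsearch.auth.password=changeme
```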
I am confused as to why, when building a Node/Express application, I don't need another web server, but when working with Java, Spring, or a Python backend, a web server like nginx or Apache is usually used. I am getting confused about what Apache and nginx do; don't they just handle HTTP requests, like we do in Node or Express? But then in Spring there are controllers that handle the requests, so why do we need JBoss or Apache running?
In the old days there was a strict separation between the "application" and the "application server"/"web server". Application servers (like JBoss) provided the configuration of resources (connections to a DB, for example) to the applications deployed on them. Web servers (like Apache) provided the configuration for the possibly many web applications hosted on them.
Currently, in the era of self-hosted apps (that is, apps that contain an embedded HTTP server), you often don't need a separate web server. But tools like nginx are still used, for example as load balancers. Application servers (JBoss, etc.) are not used that often nowadays, because with embedded HTTP servers you can do the configuration yourself without asking the Ops people to do it for you, which is quicker and more convenient.
If you are writing a NodeJS application, then you don't "need" another server, except perhaps when scaling a production-ready deployment.
The simple answer is that Express, Apache, nginx, and JBoss are all web servers. Since all of these are web servers, each can do much of the work of the others. However, each has strengths and weaknesses, which is why they often work together. For example, a common practice is to place an Express server behind nginx and let nginx handle load balancing, static assets, and SSL termination, which nginx is very good at, while letting API and WebSocket connections fall through to the Express server, which is what Express is typically good at.
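A minimal sketch of that arrangement, assuming Express listens on port 3000 (certificate paths are placeholders):

```nginx
upstream express_app {
    server 127.0.0.1:3000;                          # the Express server
}

server {
    listen 443 ssl;                                 # nginx terminates SSL
    ssl_certificate     /etc/nginx/certs/site.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/site.key;

    location /static/ {
        root /var/www;                              # nginx serves static assets directly
    }

    location / {
        proxy_pass http://express_app;              # API traffic falls through to Express
        proxy_http_version 1.1;                     # needed for WebSocket upgrades
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```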
A developer may pick Apache if they are working with PHP because the integration is so good but pick JBoss if they are working with Java EE.
I've been experimenting with securing an Elasticsearch cluster with basic auth and TLS.
I've successfully been able to do that using Search-Guard. The problem occurs with the Couchbase XDCR to Elasticsearch.
I'm using a plugin called elasticsearch-transport-couchbase, which works perfectly fine without TLS and basic auth enabled on the Elasticsearch cluster. But when I enable them together with Search-Guard, I am not able to make it work.
As far as I can tell, the issue lies with the elasticsearch-transport-couchbase plugin. This has also been discussed previously in some issues on their GitHub repo.
It is also the only plugin I can find that can be used for XDCR from Couchbase.
I'm curious about other people's experience with this. Has anyone been in the same situation and managed to set up XDCR from Couchbase to Elasticsearch with TLS?
Or perhaps there are other, more suitable tools that I have missed?
The Couchbase transport plugin doesn't support XDCR TLS yet; it's on the roadmap, but isn't going to happen soon. Search-Guard adds SSL to the HTTP/REST endpoint in ES, but the plugin opens its own endpoint (on port 9091 by default), which Search-Guard doesn't touch. I'll take a look at whether it's possible to extend Search-Guard to apply to the transport plugin; the main problem is on the Couchbase XDCR side, which doesn't expect SSL on the target endpoint.
Version 4.0 of the Couchbase Elasticsearch connector supports secure connections to Couchbase Server and/or Elasticsearch.
Reference: https://docs.couchbase.com/elasticsearch-connector/4.0/secure-connections.html
A small update: we went around the issue by setting up stunnel with xinetd, so all communication with Elasticsearch has to go through the stunnel, where the TLS is terminated.
We blocked access to port 9200, restricted 9091 to the Couchbase cluster host, and restricted 9300 to the other Elasticsearch nodes only.
Seems to work well.
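For reference, the server side of such a setup can be a minimal stunnel config along these lines (the accept port and certificate path are assumptions, not our exact values):

```ini
; /etc/stunnel/stunnel.conf - terminate TLS, forward to the local ES REST port
[elasticsearch]
accept  = 9443                ; TLS port exposed to clients (placeholder)
connect = 127.0.0.1:9200      ; plain-HTTP Elasticsearch, blocked externally
cert    = /etc/stunnel/es.pem ; server certificate and key (placeholder)
```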
I am completely new to Elasticsearch, but I like it very much. The only thing I can't find and can't get done is securing Elasticsearch for production systems. I've read a lot about using nginx as a proxy in front of Elasticsearch, but I have never used nginx and never worked with proxies.
Is this the typical way to secure Elasticsearch in production systems?
If so, are there any tutorials or nice reads that could help me implement this? I really would like to use Elasticsearch in our production system instead of Solr and Tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There are also some things here you should know: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
- At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone who can send arbitrary requests to your cluster is a "super user".
- Disable dynamic scripts. They are dangerous.
- Understand that it sometimes takes tricky configuration to limit access to indexes.
- Consider the performance implications of multiple tenants: a weakness or a bad query in one can bring down an entire cluster!
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). Even without basic auth in your proxy rules, you might, for instance, restrict access to various endpoints to specific users or from specific IP addresses.
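As an illustrative sketch of such proxy rules (the internal network range, the matched endpoints, and the htpasswd path are placeholders):

```nginx
server {
    listen 443 ssl;                           # HTTPS so credentials aren't sent in the clear
    # ssl_certificate / ssl_certificate_key omitted for brevity

    # Cluster-management endpoints: only reachable from an internal network
    location ~ ^/(_cluster|_nodes) {
        allow 10.0.0.0/8;                     # placeholder internal range
        deny  all;
        proxy_pass http://127.0.0.1:9200;
    }

    # Everything else requires basic auth
    location / {
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9200;
    }
}
```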
What we do in one of our environments is use Docker. Docker containers are only accessible to the outside world and/or to other Docker containers if you explicitly define them as such. By default, they are sealed off.
In our docker-compose setup, we have the following containers defined:
- nginx – handles all web requests, serves up static files, and proxies API queries to a container named 'middleware'
- middleware – a Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
  - redis
  - mongodb
  - elasticsearch
The net effect of this arrangement is that access to elasticsearch is only possible through the middleware piece, which ensures that authentication, roles, and permissions are correctly handled before any queries are sent through.
A full Docker environment is more work to set up than a simple nginx proxy, but the end result is more flexible, scalable, and secure.
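A stripped-down docker-compose sketch of that topology (image tags and the middleware build are placeholders):

```yaml
version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "443:443"            # the only port published to the host
    networks: [front]
  middleware:
    build: ./middleware      # hypothetical Java API server
    networks: [front, back]  # bridges the public-facing and backend networks
  redis:
    image: redis
    networks: [back]
  mongodb:
    image: mongo
    networks: [back]
  elasticsearch:
    image: elasticsearch
    networks: [back]
networks:
  front: {}
  back: {}
```

Because redis, mongodb, and elasticsearch sit only on the back network and publish no ports, nothing outside the middleware container can reach them.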
Here's a very important addition to the info presented in the answers above. I would have added it as a comment, but I don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
> Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield here: https://www.elastic.co/guide/en/shield/current/index.html
A key point to note: this requires Elasticsearch version 1.5 or newer.
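For the record, getting started with Shield on a 1.x cluster boiled down to something like this (plugin names as in the Shield docs; the username and role are placeholders):

```sh
# Install the license and Shield plugins (Elasticsearch 1.x era syntax)
bin/plugin -i elasticsearch/license/latest
bin/plugin -i elasticsearch/shield/latest

# Add a user to the file-based realm
bin/shield/esusers useradd es_admin -r admin

# Restart Elasticsearch, then verify that requests now require credentials
curl -u es_admin http://localhost:9200/
```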
Yeah, I also had the same question, but I found one plugin provided by the Elasticsearch team, namely Shield. It is a limited trial version; for production you need to buy a license. Please see the link below for your perusal.
https://www.elastic.co/guide/en/shield/current/index.html
How can I add LDAP-based access control to the standalone Neo4j REST server? By default, there are no security features in Neo4j. In my opinion, I should run an Apache HTTP server in front of Neo4j, which uses Jetty inside. But I also know that Jetty can do LDAP; however, since it is part of Neo4j, it's hard to configure. Which way should I go?
Right now I think there are two possibilities. The first, as you mention, is to front Neo4j with Apache and let Apache take on the security workload.
The other is much more invasive: write a filter for JAX-RS (or a servlet filter) and get it registered with Jersey. If you're comfortable hacking a bit of code, the second gives you a single-box solution.
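A rough sketch of the filter approach using the (newer) JAX-RS 2.0 ContainerRequestFilter API; the hard-coded credentials and realm are placeholders, and a real deployment would check against LDAP instead:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// Rejects any request that doesn't carry valid HTTP Basic credentials.
@Provider
public class BasicAuthFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext ctx) {
        String header = ctx.getHeaderString("Authorization");
        if (header == null || !isValid(header)) {
            ctx.abortWith(Response.status(Response.Status.UNAUTHORIZED)
                    .header("WWW-Authenticate", "Basic realm=\"neo4j\"")
                    .build());
        }
    }

    // Placeholder check against a fixed user:password pair; in the LDAP
    // scenario this is where you would bind against the directory instead.
    private boolean isValid(String header) {
        String expected = "Basic " + Base64.getEncoder()
                .encodeToString("neo4j:secret".getBytes(StandardCharsets.UTF_8));
        return expected.equals(header);
    }
}
```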