Is there a way to make DB2 not accept SSL3?
I'm trying to secure a couple of DB2 databases I have on a couple of servers against the POODLE attack. I know you can do this through the operating system itself, but my question is: if I don't have control over the OS, can I at least make DB2 stop using SSL3?
I have many Java applications, and some of them might be using SSL3. I want to be sure these applications will fail when they try to use SSL3 to connect to these DB2 databases.
Starting with DB2 LUW 9.7 (I'm assuming you mean LUW here...), you can specify which implementation of SSL you want to use when doing the handshake. It looks like (at least since they implemented this configuration option) DB2 has only ever supported TLS. The configuration option is called ssl_version.
Additionally, you can specify which ciphers you wish to use with the ssl_cipherspecs configuration option. The default is to allow DB2 and the client to negotiate the strongest cipher they both understand.
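As a concrete sketch of both options (the exact accepted values vary by DB2 release and fixpack, so the `TLSV12` value and the cipher suite name here are assumptions to verify against the documentation for your version):

```shell
# Restrict the SSL/TLS version used for handshakes.
# TLSV12 is an assumption -- older 9.7 fixpacks may only accept TLSV1.
db2 update dbm cfg using SSL_VERSION TLSV12

# Optionally pin the allowed cipher suites instead of letting the
# client and server negotiate (example suite name is an assumption).
db2 update dbm cfg using SSL_CIPHERSPECS TLS_RSA_WITH_AES_256_GCM_SHA384

# Restart the instance so the new settings take effect.
db2stop
db2start
```

After the restart, a Java client forced to SSL3 should fail the handshake, which is the behavior the question asks for.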
I've been experimenting with trying to secure an Elasticsearch cluster with basic auth and TLS.
I've successfully been able to do that using Search-Guard. The problem occurs with the Couchbase XDCR to Elasticsearch.
I'm using a plugin called elasticsearch-transport-couchbase, which works perfectly fine without TLS and basic auth enabled on the Elasticsearch cluster. But when enabling those with Search-Guard, I am not able to make it work.
As far as I can tell the issue lies with the elasticsearch-transport-couchbase plugin. This has also been discussed previously in some issues on their Github repo.
It is also the only plugin I can find that can be used for XDCR from Couchbase.
I'm curious about other people's experiences with this. Has anyone been in the same situation and managed to set up XDCR from Couchbase to Elasticsearch with TLS?
Or perhaps there are some other more suitable tools that I can use that I have missed?
The Couchbase transport plugin doesn't support XDCR TLS yet; it's on the roadmap, but isn't going to happen soon. Search-Guard adds SSL to the HTTP/REST endpoint in ES, but the plugin opens its own endpoint (on port 9091 by default), which Search-Guard doesn't touch. I'll take a look at whether it's possible to extend Search-Guard to apply to the transport plugin - the main problem is on the Couchbase XDCR side, which doesn't expect SSL on the target endpoint.
Version 4.0 of the Couchbase Elasticsearch connector supports secure connections to Couchbase Server and/or Elasticsearch.
Reference: https://docs.couchbase.com/elasticsearch-connector/4.0/secure-connections.html
A small update. We went around the issue by setting up stunnel with xinetd, so all communication with ES has to go through the stunnel, where the TLS terminates.
We blocked access to port 9200, restricted 9091 to the Couchbase cluster host, and restricted 9300 to the other ES nodes only.
It seems to work well.
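For anyone wanting to replicate this, a minimal sketch of the idea (the certificate path, IP addresses and subnet are placeholders; adapt to your environment). stunnel terminates TLS in front of the Elasticsearch HTTP port, and iptables locks the other ports down:

```shell
# /etc/stunnel/es.conf -- terminate TLS and forward to the local ES HTTP port
# (cert path is a placeholder):
#   [elasticsearch]
#   accept  = 443
#   connect = 127.0.0.1:9200
#   cert    = /etc/stunnel/es.pem

# Block direct access to the ES REST port from anywhere but localhost.
iptables -A INPUT -p tcp --dport 9200 ! -s 127.0.0.1 -j DROP

# Allow the XDCR transport port only from the Couchbase host (placeholder IP).
iptables -A INPUT -p tcp --dport 9091 -s 10.0.0.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 9091 -j DROP

# Allow the ES node-to-node port only from the cluster subnet (placeholder).
iptables -A INPUT -p tcp --dport 9300 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 9300 -j DROP
```

The key design point is that 9091 stays plain HTTP (since Couchbase XDCR won't speak TLS to it), but only the Couchbase host can reach it.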
I feel like I must be really thick here but I'm struggling with couchbase configuration.
I am looking to replace memcached with Couchbase and want to secure things more to my liking. On the server there are a number of applications set up to use memcached, so it needs to be as drop-in as possible, without changing the applications' configuration.
What I have done is install Couchbase on each of the web servers, like I did with memcached, and so far with my config everything is working.
The problem I have is that port 11211 is open to the world at large, and this terrifies me. Either I'm thick or I'm not looking in the right place, but I want to restrict port 11211 to only listen on localhost (127.0.0.1).
Now, Couchbase seems to have reams and reams of documentation, but I cannot find how to do this, and I'm starting to feel like you need to be a Couchbase expert to make simple changes.
I'm aware that the Couchbase way of securing things is to use password-protected buckets with SASL auth, but for me this isn't an option.
While I'm on the subject, and assuming I can change the listening interface: are there any other Couchbase ports that don't need to be open to the world and can be restricted to localhost?
Many many thanks in advance for any help, I'm at my wits end.
Let's back up a bit. Unlike memcached, Couchbase is really meant to be installed as a separate tier in your infrastructure and as a cluster even if you are just using it as a high availability cache. A better architecture would be to have Couchbase installed on its own properly sized nodes (in a cluster using Couchbase buckets) and then install Moxi (Couchbase's memcached proxy) on each web server that will talk to the Couchbase cluster on your app's behalf. This architecture will give you the functionality you need, but then give you the high availability, replication, failover, etc that Couchbase is known for.
In the long run, I would recommend that you transition your code to using Couchbase's client SDKs to access the cluster as that will give you the most benefits, performance, resilience, etc. for your caching needs. Moxi is meant more as an interim step, not a final destination.
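As a rough sketch of the Moxi-on-each-web-server layout (the flag names and the streaming URL are assumptions based on Moxi's documented usage; verify them against the docs for your Couchbase version before relying on this):

```shell
# Hypothetical invocation -- check flag names against your Moxi release.
# Moxi listens on the memcached port locally and proxies to the
# Couchbase cluster (hostname and bucket name are placeholders).
moxi -Z port_listen=11211 \
     -z url=http://couchbase-node1:8091/pools/default/bucketsStreaming/default
```

With this in place, the applications keep talking memcached protocol to 127.0.0.1:11211 unchanged, while the actual data lives on the separate Couchbase tier.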
I am completely new to elasticsearch, but I like it very much. The only thing I can't find and can't get done is securing elasticsearch for production systems. I read a lot about using nginx as a proxy in front of elasticsearch, but I have never used nginx and never worked with proxies.
Is this the typical way to secure elasticsearch in production systems?
If so, are there any tutorials or nice reads that could help me to implement this feature. I really would like to use elasticsearch in our production system instead of solr and tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There are also some things here you should know: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone that can send arbitrary requests to your cluster is a “super user”.
Disable dynamic scripts. They are dangerous.
Understand that sometimes tricky configuration is required to limit access to individual indexes.
Consider the performance implications of multiple tenants: a weakness or a bad query in one can bring down an entire cluster!
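For the "disable dynamic scripts" point, on Elasticsearch 1.x the relevant settings might look like this in elasticsearch.yml (setting names changed across versions, so treat this as a sketch and check the reference for your release):

```yaml
# Disable dynamic scripting (ES 1.x setting name).
script.disable_dynamic: true

# Bind only to an internal interface so the node isn't world-reachable.
network.host: 127.0.0.1

# Stop strangers on the network from auto-joining the cluster.
discovery.zen.ping.multicast.enabled: false
```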
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). Even without basic auth in your proxy rules, you might, for instance, restrict access to various endpoints to specific users or from specific IP addresses.
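A minimal nginx sketch of that idea (the certificate paths, htpasswd file, upstream address and admin subnet are all placeholders):

```nginx
# Proxy Elasticsearch behind HTTPS + basic auth.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/es.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/es.key;

    location / {
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9200;
    }

    # Example of restricting a sensitive endpoint to an admin network
    # instead of (or in addition to) basic auth.
    location /_cluster {
        allow 10.0.0.0/24;   # placeholder admin subnet
        deny  all;
        proxy_pass http://127.0.0.1:9200;
    }
}
```

Remember that this only helps if Elasticsearch itself is bound to localhost; otherwise clients can simply bypass the proxy.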
What we do in one of our environments is use Docker. Docker containers are only accessible to the world and/or other Docker containers if you explicitly define them as such. By default, they are isolated.
In our docker-compose setup, we have the following containers defined:
nginx - Handles all web requests, serves up static files and proxies API queries to a container named 'middleware'
middleware - A Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
redis
mongodb
elasticsearch
The net effect of this arrangement is that access to elasticsearch can only happen through the middleware piece, which ensures authentication, roles and permissions are correctly handled before any queries are sent through.
A full Docker environment is more work to set up than a simple nginx proxy, but the end result is something that is more flexible, scalable and secure.
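A stripped-down docker-compose sketch of that layout (the image names and tags are placeholders; the point is that only nginx publishes a host port, so redis, mongodb and elasticsearch are reachable solely from the containers linked to them):

```yaml
nginx:
  image: nginx                 # placeholder image
  ports:
    - "443:443"                # the only port published to the host
  links:
    - middleware

middleware:
  image: example/middleware    # hypothetical app image
  links:                       # backends reachable only from middleware
    - redis
    - mongodb
    - elasticsearch

redis:
  image: redis                 # no ports: section -- not exposed to the host
mongodb:
  image: mongo
elasticsearch:
  image: elasticsearch
```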
Here's a very important addition to the info presented in answers above. I would have added it as a comment, but don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield in its documentation.
A very key point to note is this requires version 1.5 or newer.
Yeah, I also had the same question, but I found a plugin provided by the Elasticsearch team: Shield. The free version is limited; for production you need to buy a license. Please find the link below for your perusal.
https://www.elastic.co/guide/en/shield/current/index.html
I want to set up MongoDb on a single server, and I've been searching around to make sure I do it right. I have gleaned a few basics on security so far:
Enable authentication (http://www.mongodb.org/display/DOCS/Security+and+Authentication - not enabled by default?)
Only allow localhost connections
In PHP, be sure to cast GET and POST parameters to strings to avoid injection attacks (http://www.php.net/manual/en/mongo.security.php)
I've also picked up one thing about reliability.
You used to have to use sharding across multiple boxes, but now you can just enable journaling? (http://stackoverflow.com/questions/3487456/mongodb-are-reliability-issues-significant-still)
Is that the end of the story? Enable authentication and journaling and you are good to go on a single server?
Thanks!
If you are running on a single server, then you should definitely have journaling enabled. On 2.0, this is the default for 64-bit builds; on 32-bit builds or older releases (the 1.8.x series) you can enable it with the --journal command-line flag or config file option. Be aware that using journaling will cause MongoDB to use double the memory it normally would, which is mostly an issue on 32-bit machines (since memory there is constrained to around 2GB ordinarily, with journaling it would be effectively halved).
Authentication can help, but the best security measures are to ensure that only machines you control can talk to MongoDB. You can do this with the --bind_ip command-line flag or config file option. You should also set up a firewall (iptables or similar) as an extra measure.
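Putting those pieces together, a minimal sketch (the dbpath and the assumption that 27017 is your MongoDB port are placeholders to adapt):

```shell
# Start mongod with journaling on, auth enabled, and bound to localhost only.
mongod --journal --auth --bind_ip 127.0.0.1 --dbpath /var/lib/mongodb

# Or the equivalent lines in the old-style /etc/mongodb.conf format:
#   journal = true
#   auth = true
#   bind_ip = 127.0.0.1

# Belt-and-braces: firewall the MongoDB port from everything but localhost.
iptables -A INPUT -p tcp --dport 27017 ! -s 127.0.0.1 -j DROP
```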
As for injection attacks, you should mostly be safe, as long as you don't blindly convert JSON (or similar structures) into PHP assocs and pass them directly to the MongoDB methods. If you construct the assoc yourself, by processing the $_POST or $_GET values, you should be safe.
I am using Firebird server 2.5.0. As far as I know there is no way to encrypt a database in Firebird, so how do I secure the user data?
Manually encrypting all data before saving would cause trouble, since I would no longer be able to use something like "STARTING WITH".
I use CentOs for Database servers. These servers are communicating with an Application Server which runs on Windows Server 2008.
Encryption is only one of several protection measures you can take against potential adversaries, and there are other methods too. You need a proper security analysis before deciding whether to encrypt, and if not, what to do instead. You have to consider who the adversaries are, where they could strike, and so on. Blind use of encryption may be a waste of resources, time and money. Do the security analysis first.
DB encryption is possible in version 3:
With Firebird 3 comes the ability to encrypt data stored in database. Not all of the database file is encrypted:
just data, index and blob pages.
To make it possible to encrypt a database you need to obtain or write a database crypt plug-in.
Refer to Firebird-3.0.0_Alpha1-ReleaseNotes for details