MongoDB single-server security and reliability for a production environment?

I want to set up MongoDB on a single server, and I've been searching around to make sure I do it right. I have gleaned a few basics on security so far:
Enable authentication (http://www.mongodb.org/display/DOCS/Security+and+Authentication - not enabled by default?)
Only allow localhost connections
In PHP, be sure to cast GET and POST parameters to strings to avoid injection attacks (http://www.php.net/manual/en/mongo.security.php)
I've also picked up one thing about reliability.
You used to have to use sharding across multiple boxes, but now you can just enable journaling? (http://stackoverflow.com/questions/3487456/mongodb-are-reliability-issues-significant-still)
Is that the end of the story? Enable authentication and journaling and you are good to go on a single server?
Thanks!

If you are running on a single server, then you should definitely have journaling enabled. On 2.0 this is the default for 64-bit builds; on 32-bit builds or older releases (the 1.8.x series) you can enable it with the --journal command-line flag or config file option. Be aware that journaling causes MongoDB to use double the memory it normally would, which is mostly an issue on 32-bit machines (memory there is ordinarily constrained to around 2GB; with journaling it is effectively halved).
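As a concrete sketch of either approach (the dbpath is a placeholder; adjust for your system):

```
# 1.8.x / 32-bit: enable journaling explicitly on the command line
mongod --journal --dbpath /var/lib/mongodb

# or equivalently in the old ini-style config file (e.g. /etc/mongodb.conf)
journal = true
```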
Authentication can help, but the best security measures are to ensure that only machines you control can talk to MongoDB. You can do this with the --bind_ip command-line flag or config file option. You should also set up a firewall (iptables or similar) as an extra measure.
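For instance (a sketch using the default interface and port; adjust to taste):

```
# mongod.conf: listen only on the loopback interface
bind_ip = 127.0.0.1

# iptables, as a second layer: allow MongoDB's default port only from localhost
iptables -A INPUT -p tcp --dport 27017 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP
```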
As for injection attacks, you should mostly be safe, so long as you don't blindly convert JSON (or similar structures) into PHP associative arrays and pass them directly to the MongoDB methods. If you construct the array yourself, by processing the $_POST or $_GET values, you should be safe.
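A minimal sketch of that casting, using the old mongo extension's API ($collection and the parameter name are placeholders):

```php
<?php
// Without the cast, ?username[$ne]=x would arrive as an array and
// smuggle a query operator into the match.
$username = isset($_GET['username']) ? (string) $_GET['username'] : '';
$user = $collection->findOne(array('username' => $username));
```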

Related

Why use the nodeJS driver to manipulate a MongoDB database?

What's the point of using the driver and JavaScript if we can perform the same query operations much more easily from the Mongo shell?
In theory, anything a piece of code does could just as well be achieved through a good shell.
So why do we actually stay away from the shell at all costs?
Security concerns: when an application shells out to perform operations, it is very vulnerable to exploits.
Configuration: what if the server does not have the needed client installed, or the client is the wrong version?
The driver handles many edge cases you might not notice at first glance: connection loss, connection pooling, and such.
Briefly, think of the shell as a user interface for administrators. It might be powerful enough for a task, but as a developer you would rather bypass this middleman and communicate directly with the server.
If you program in a certain language (say Java), it's much easier to use the Java driver to access MongoDB than to invoke the mongo shell from Java and execute commands through it. The same applies to the JavaScript language, and to the Node.js host environment in particular. That's why using a driver makes sense.
Actually this whole thing applies not just to MongoDB but to relational databases too (like MySQL, Oracle, etc.).
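As a minimal sketch of the driver in Node.js (recent versions of the official driver; the connection URL and names are placeholders, and it assumes `npm install mongodb`):

```javascript
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect(); // the driver manages pooling and reconnects for you
  const users = client.db('app').collection('users');
  console.log(await users.findOne({ username: 'alice' }));
  await client.close();
}

main().catch(console.error);
```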

How can I disable SSL3 on DB2?

Is there a way to make DB2 not accept SSLv3?
I'm trying to secure a couple of DB2 databases I have on a couple of servers against the POODLE attack. I know you can do this through the operating system itself, but my question is: if I don't have control over the OS, can I at least make DB2 stop accepting SSLv3?
I have many Java applications, and some of them might be using SSLv3. I want to be sure these applications will fail when they try to use SSLv3 to connect to these DB2 databases.
Starting with DB2 LUW 9.7 (I'm assuming you mean LUW here...), you can specify which protocol versions you want to allow when doing the handshake. It looks like (at least since they implemented this configuration option) DB2 has only ever supported TLS. The configuration option is called ssl_versions.
Additionally, you can specify which ciphers you wish to use with the ssl_cipherspecs configuration option. The default is to allow DB2 and the client to negotiate the strongest cipher they both understand.
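A sketch of setting both options from the DB2 command line (the accepted values vary by DB2 version; check the DB2 LUW documentation for yours before applying):

```
# restrict the handshake to TLS 1.2 only
db2 update dbm cfg using SSL_VERSIONS TLSV12

# optionally pin the allowed ciphers rather than negotiating
db2 update dbm cfg using SSL_CIPHERSPECS TLS_RSA_WITH_AES_256_CBC_SHA256

# restart the instance for the changes to take effect
db2stop
db2start
```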

Securing elasticsearch

I am completely new to Elasticsearch, but I like it very much. The only thing I can't figure out is how to secure Elasticsearch for production systems. I read a lot about using nginx as a proxy in front of Elasticsearch, but I have never used nginx and never worked with proxies.
Is this the typical way to secure Elasticsearch in production systems?
If so, are there any tutorials or nice reads that could help me implement this? I really would like to use Elasticsearch in our production system instead of Solr and Tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There's also some things here you should know: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone who can send arbitrary requests to your cluster is a “super user”.
Disable dynamic scripts. They are dangerous (see the snippet after this list).
Understand that sometimes-tricky configuration is required to limit access to indexes.
Consider the performance implications of multiple tenants: a weakness or a bad query in one can bring down an entire cluster!
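For the scripting point, a sketch of the relevant setting for the Elasticsearch releases of that era (1.2 and later disable dynamic scripting by default; verify against your version):

```yaml
# elasticsearch.yml
script.disable_dynamic: true
```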
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). In your proxy rules you can also, for instance, restrict access to various endpoints to specific users or to specific source IP addresses.
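A minimal sketch of such a proxy (hostnames, certificate paths, and the allowed network are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name search.example.com;
    ssl_certificate     /etc/nginx/ssl/search.crt;
    ssl_certificate_key /etc/nginx/ssl/search.key;

    # everything requires credentials, sent over HTTPS
    location / {
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9200;
    }

    # example of an endpoint restricted by source address instead
    location /_cluster {
        allow 10.0.0.0/8;
        deny  all;
        proxy_pass http://127.0.0.1:9200;
    }
}
```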
What we do in one of our environments is to use Docker. Docker containers are only accessible to the outside world and/or to other Docker containers if you explicitly define them as such; by default, they are isolated.
In our docker-compose setup, we have the following containers defined:
nginx - Handles all web requests, serves up static files and proxies API queries to a container named 'middleware'
middleware - A Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
redis
mongodb
elasticsearch
The net effect of this arrangement is that access to Elasticsearch can only go through the middleware piece, which ensures authentication, roles, and permissions are correctly handled before any queries are sent through.
A full Docker environment is more work to set up than a simple nginx proxy, but the end result is more flexible, scalable, and secure.
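A simplified sketch of that docker-compose layout (image names and the published port are illustrative):

```yaml
# docker-compose.yml (v1-era syntax, where containers can only reach
# what they are linked to; in newer Compose you'd use networks instead)
nginx:
  image: nginx
  ports:
    - "443:443"          # the only port published to the outside world
  links:
    - middleware
middleware:
  image: example/middleware   # hypothetical Java API server
  links:
    - redis
    - mongodb
    - elasticsearch
redis:
  image: redis
mongodb:
  image: mongo
elasticsearch:
  image: elasticsearch
```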
Here's a very important addition to the info presented in answers above. I would have added it as a comment, but don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield in Elastic's documentation: https://www.elastic.co/guide/en/shield/current/index.html
A key point to note is that Shield requires Elasticsearch version 1.5 or newer.
Yeah, I had the same question, but I found a plugin provided by the Elasticsearch team, i.e. Shield. It is available only as a limited trial version; for production you need to buy a license. Please find the attached link for your perusal.
https://www.elastic.co/guide/en/shield/current/index.html

How to use mod_security as standalone?

I've seen the module named "standalone" in the ModSecurity package, but I'm not sure how to use it after building and installing it!
Are there any good resources for getting started?
It does not appear to be possible, based on what the ModSecurity website says about its modes of operation:
Reverse proxies are effectively HTTP routers, designed to stand between web servers and their clients. When you install a dedicated Apache reverse proxy and add ModSecurity to it, you get a "proper" network web application firewall, which you can use to protect any number of web servers on the same network. Many security practitioners prefer having a separate security layer. With it you get complete isolation from the systems you are protecting. On the performance front, a standalone ModSecurity will have resources dedicated to it, which means that you will be able to do more (i.e., have more complex rules). The main disadvantage of this approach is the new point of failure, which will need to be addressed with a high-availability setup of two or more reverse proxies.
They consider it "standalone" when you create a dedicated proxy host that forwards to internal hosts.
That works, but it's technically not standalone.
I also filed a bug, and it was confirmed by Felipe Zimmerle:
Standalone is a wrapper to Apache internals that allows ModSecurity to be executed. That wrapper still demand Apache pieces. It is true that you can extend your application using the Standalone version although, you will need some Apache pieces
As you have noted, ModSecurity is an add-on to an existing web server: originally an Apache module (hence the name), but now also available for Nginx and IIS.
You can run it in embedded mode (i.e., as part of your main web server), or in reverse proxy mode (which is basically the same, except you set up a separate web server, run ModSecurity there, and direct all traffic through it).
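A minimal sketch of embedded mode on Apache (the module path and the trivial rule are placeholders; real deployments load a rule set such as the OWASP CRS):

```apache
LoadModule security2_module modules/mod_security2.so

<IfModule security2_module>
    SecRuleEngine On
    # illustrative rule only: reject any request argument containing "attack"
    SecRule ARGS "attack" "id:1001,phase:2,deny,status:403"
</IfModule>
```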
To be perfectly honest, I've never found much point in the reverse proxy method. I guess it does mean you could use it with unsupported web servers (i.e., if you are not using Apache, Nginx, or IIS), and it would reduce the load on your main web server, but other than that it seems like extra infrastructure for no real gain. Some people might also prefer to run the ModSecurity checks in front of several web servers, but I would argue that if you have several web servers, it is likely for performance and resiliency reasons, so why not spread ModSecurity across them too rather than creating a single point of failure, and potential bottleneck, in front of them? The only other reason would be to apply session-level rules (e.g., detecting people changing session IDs), which might otherwise be spread between different web servers, but I've never been convinced those rules are that useful anyway.
When I build ModSecurity I get a mod_security2.so library but no separate standalone binary, so I presume you're just seeing this from hunting through the source (I do see a "standalone" folder)? I'd say the mere existence of a "standalone" folder in the source is no guarantee that it can run as a completely separate, standalone piece.
I'd also question why you would want to run this as a standalone app even if you could. Web servers have a lot of functionality in them, and ModSecurity was written for web security, not for all the other things a web server does (e.g., being fast, understanding the HTTP protocol, gzipping and ungzipping, etc.); running it standalone would needlessly stretch what ModSecurity has to handle. So why not let a web server take care of all that and let ModSecurity do what it's good at?
If you are using ModSecurity then I guess you have web apps (presumably with a web server), so why not use it through that?
Finally is there any problem with installing this through Apache (or Nginx or IIS)? It's free software that's well supported and easy to set up.
I guess ultimately I don't understand the reason for your question. Is there a particular problem you are trying to solve, or is this more just curiosity?

Can CouchDB actually be used for a desktop application?

I'm hoping someone can validate or correct my conclusions here.
I'm looking into writing a small side project. I want to create a desktop application for taking notes that will synchronise to a web-server so that multiple installations can be kept in step and data shared and also so that it can be accessed via a browser if necessary.
I've kind of been half-listening to the noises about CouchDB and I've heard mention of "offline functionality", of desktop-couchdb and of moves to utilise its ability to handle intermittent communications to enable distributed applications in the mobile market. This all led me to believe that it might be an interesting option to look at for providing my data storage and also handling my synchronisation needs, but after spending some time looking around for info on how to get started my conclusion is that I've got completely the wrong end of the stick and the reality is that:
There's no way of packaging up a CouchDB instance, distributing it as part of a desktop application and running it in the context of that application to provide local storage and synchronisation to a central database.
Am I correct here? If so is there any technology out there that does this sort of thing or am I left just rolling my own local storage and maybe still using CouchDB on the server?
Update (2012/05): check out the new TouchDB projects from Couchbase if you are targeting Mac OS X and/or iOS or Android. These actually use SQLite under the hood (at least for now) but can replicate to/from a "real" CouchDB server. Another client-side alternative that is finally starting to mature is PouchDB, which runs in IndexedDB-capable browser engines. Using these, or using them as inspiration for a similar port to another desktop platform, is now becoming a better-trodden path.
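As a sketch of what that replication looks like with PouchDB's modern API (the database name and server URL are placeholders):

```javascript
// assumes PouchDB is loaded, e.g. via a <script> tag or require('pouchdb')
const db = new PouchDB('notes');

db.sync('http://localhost:5984/notes', {
  live: true,   // keep syncing as changes arrive
  retry: true   // back off and reconnect on intermittent connections
}).on('error', console.error);
```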
Original answer:
There's no way of packaging up a CouchDB instance, distributing it as part of a desktop application and running it in the context of that application to provide local storage and synchronisation to a central database.
At this point in time, your statement is practically correct although it is possible to include CouchDB in an app — for an example see CouchDBX.app which is a thin wrapper around a prefixed bundle of CouchDB and all its dependencies.
The easiest way to build a CouchDB app is to assume that the user will already have a CouchDB server running. This is easier than it sounds, especially with CouchOne's hosting or a prebuilt app like CouchDBX on OS X or DesktopCouch on Ubuntu. The latter is especially interesting because, if I understand correctly, it is included by default with Ubuntu these days and automatically spins up a per-user CouchDB server when you query its port via D-Bus. Something similar could (and should) be done on OS X using launchd and Bonjour.
So, as you write, you would either design your app to store data in a local format and optionally sync with a CouchDB service you provide, or you'd have to build and bundle Erlang, SpiderMonkey, and CouchDB together with your app, along with some scripts to make sure it is running when needed. This is possible, but obviously neither of these is ideal, and believe me, you're not the only one wanting a simpler solution for desktop-oriented apps!
