Couchbase security: can I restrict the moxi port 11211 to localhost? - security

I feel like I must be really thick here but I'm struggling with couchbase configuration.
I am looking to replace memcached with Couchbase and want to lock things down more to my liking. On the server there are a number of applications that are set up to use memcached, so the replacement needs to be as drop-in as possible without changing the applications' configuration.
What I have done is install Couchbase on each of the web servers, just as I did with memcached, and so far with my config everything is working.
The problem is that port 11211 is open to the world at large, and this terrifies me. Either I'm thick or I'm not looking in the right place, but I want port 11211 to listen only on localhost (127.0.0.1).
Couchbase seems to have reams and reams of documentation, but I cannot find how to do this and am starting to feel like you need to be a Couchbase expert to make simple changes.
I'm aware that Couchbase's security model is to use password-protected buckets with SASL auth, but for me this isn't an option.
While I'm on the subject, and assuming I can change the listening interface, are there any other Couchbase ports that don't need to be open to the world and can be restricted to localhost?
Many many thanks in advance for any help, I'm at my wits end.

Let's back up a bit. Unlike memcached, Couchbase is really meant to be installed as a separate tier in your infrastructure, and as a cluster, even if you are just using it as a high-availability cache. A better architecture would be to have Couchbase installed on its own properly sized nodes (in a cluster using Couchbase buckets) and then install Moxi (Couchbase's memcached proxy) on each web server to talk to the Couchbase cluster on your app's behalf. This architecture will give you the drop-in functionality you need, while also giving you the high availability, replication, failover, etc. that Couchbase is known for.
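With Moxi listening on each web server's loopback interface, your applications keep speaking the memcached protocol to 127.0.0.1:11211 and never need to know about the cluster behind it. Here's a minimal sketch of what that looks like from the application side, assuming the `python-memcached` library (any memcached client would behave the same way):

```python
import memcache

# Moxi is assumed to be listening on the loopback interface only,
# so the application config never has to change from its memcached days.
mc = memcache.Client(['127.0.0.1:11211'], debug=0)

mc.set('session:42', {'user': 'alice'}, time=300)  # cache for 5 minutes
print(mc.get('session:42'))
```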
In the long run, I would recommend that you transition your code to using Couchbase's client SDKs to access the cluster as that will give you the most benefits, performance, resilience, etc. for your caching needs. Moxi is meant more as an interim step, not a final destination.
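As a rough illustration of what that transition looks like, here is a minimal sketch using the Couchbase Python SDK (2.x-style API); the node address and bucket name are placeholders for your own cluster:

```python
from couchbase.bucket import Bucket

# Connect directly to the cluster instead of going through Moxi;
# the SDK is cluster-aware and handles topology changes and failover.
cb = Bucket('couchbase://cb-node1.example.com/default')

cb.upsert('greeting', {'msg': 'hello'}, ttl=60)  # expires after 60 seconds
result = cb.get('greeting')
print(result.value)
```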

Related

Hosting NodeJS service application

I would like to build a service using NodeJS. However, this question is more architectural in nature. Let's say I have 2 companies with their own network security. Company A has a SQL Server instance, while Company B would host the NodeJS service application. In order to get data, the NodeJS service has to go to the SQL Server instance in Company A. Is this considered "bad practice"? If that's the case, what's the alternative? As a note, there is also the option of connecting to the SQL Server instance from AWS.
From an architectural standpoint, it's definitely not desirable for an application to access a database across multiple network boundaries (potentially over the Internet), for several reasons: latency overhead, security (possibly), and management overhead (if the DB is owned by another company).
Generally, the DB should be as close as possible to the app, because usually it's the main bottleneck of a system, and it will impact the throughput of the application at some point.
However, the right answer here depends on the requirements of your app. If the traffic volumes are not very big and the performance hit is acceptable, then you can use that approach (with all the pros and cons it may have).
Ideally you should not do this. You could set up a replica of the database on your application's network; to keep the replica in sync, you could set up a VPN connection.

How to use MongoDB on Amazon Web Services

In order to use MongoDB on my node.js AWS EC2 instance do I simply install MongoDB and create a database within the instance via the command line after logging in via SSH?
In other words do I simply create a DB in the EC2 instance for my web app just as I would locally on my machine?
Just from long (and at the beginning painful) experience with EC2 & MongoDB, here are some gotchas.
I am assuming from your question that you are just starting out, so I am going to assume a minimal setup:
If you install MongoDB on a server with access to the Internet, then make sure you also apply MongoDB roles to your DB. Do not, I repeat, do not leave it open to the world. Admin and read/write roles are critical here, and the MongoDB docs will help you. BTW, even if it is totally secure behind a firewall and other such things, always use roles. They are your last line of defense (see the sketch after this list).
Study and understand exactly how security groups work, in order to limit inbound and outbound traffic.
If you can, use the Elastic IP. It will save you many headaches if you move servers, not the least of which is that your IP will not change.
When you gear up to an Internet-facing server with Mongo behind it, be it with sharding, clusters, etc., read up on the NAT Gateway. It is confusing at first, but the NAT Gateway (not the NAT instance) is a decent setup in one configuration or another.
Take snapshots or complete images of your environment when you change it. This is great for backup, and also when you move to a more robust server, it will save you a great deal of work.
If you have not already, try using MongoBooster or RoboMongo. They will help you immensely with your Mongo work.
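On the roles point above, creating a scoped user is a one-off command. Here is a minimal sketch using pymongo; the database name, username and passwords are hypothetical, and it assumes auth is already enabled and you connect as an admin user:

```python
from pymongo import MongoClient

# Connect as an administrative user (assumes --auth is enabled on mongod).
client = MongoClient('mongodb://admin:adminpass@localhost:27017/admin')

# Create an application user limited to read/write on a single database.
appdb = client['appdb']
appdb.command(
    'createUser', 'appuser',
    pwd='a-strong-password',
    roles=[{'role': 'readWrite', 'db': 'appdb'}],
)
```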
Good luck and enjoy!
AWS's own managed implementation of MongoDB is DocumentDB, which from what I can tell is built against the open-source MongoDB 3.6 API, so newer MongoDB features might not be supported.
An interesting article comparing DocumentDB with MongoDB Atlas (MongoDB's cloud solution):
https://medium.com/@michaelrbock/nosql-showdown-mongodb-atlas-vs-aws-documentdb-5dfb00317ca2
In the end, if you really want MongoDB on AWS, my opinion is that you should just install it on an EC2 machine; I've tried going via DocumentDB and some MongoDB commands don't work. Or choose AWS's own NoSQL solution, DynamoDB, instead. DocumentDB mostly seems to be there to compete with the MongoDB Atlas cloud solution, or to offer a dedicated MongoDB-compatible service for companies that already use MongoDB and want to move to AWS.
You have different alternatives. To answer your question: yes, you can do it that way. But, there is also an official guide by Amazon to set up a MongoDB cluster on AWS.
Also, if you only need a NoSQL database, you should check out DynamoDB, developed by Amazon. That would eliminate the need for an EC2 instance for the database. For more info, check the official docs.
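For a sense of what that looks like, here is a minimal sketch using boto3; the table name and key schema are hypothetical, and the table is assumed to already exist with `session_id` as its partition key:

```python
import boto3

# DynamoDB is fully managed, so there is no server to run or secure yourself.
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamodb.Table('sessions')

table.put_item(Item={'session_id': 'abc123', 'user': 'alice'})
response = table.get_item(Key={'session_id': 'abc123'})
print(response.get('Item'))
```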

Mongodb hosting remote vs on the same network

What is the killer reason to use remote DB hosting services for MongoDB (like compose.io) for a Node.js application, versus hosting MongoDB on the same network (in the same datacenter, etc.), for example when using PaaS providers (like modulus.io) which offer "integrated" MongoDB hosting?
How much speed/performance might degrade when using remote DBs over the Internet, and how do DB providers solve this? How do you make the right decision on this?
The reason you use something like compose.io is that you don't want to deal with that on your own and have experts taking care of it that know what they are doing. In the best case with support so you can take further advantage of those experts. And that's the only reason.
If you use Modulus, which has this anyway, and you run your application there as well, even better. There is no real reason to run your Node application on Modulus and your MongoDB on a different cloud hosting service.
In practice that probably doesn't matter as much because they all use AWS anyway ;)
Important: if they DON'T run in the same network, make sure your MongoDB is protected properly (!!). If you do run in the same network, just make sure MongoDB is not accessible from the outside at all, which is definitely the better solution!
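"Protected properly" here means, at minimum, enabling authentication and only ever connecting with credentials (ideally over TLS). A minimal sketch with pymongo, where the host, database and credentials are placeholders:

```python
from pymongo import MongoClient

# Assumes auth is enabled on the mongod and the user was created beforehand.
client = MongoClient(
    'mongodb://appuser:a-strong-password@db.internal.example.com:27017/appdb'
    '?authSource=appdb'
)

db = client['appdb']
db.items.insert_one({'name': 'example'})
```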
Hope that helps

Securing Cassandra on a public machine?

I am considering a Cassandra cluster deployment to Google Compute Engine. However one of our principal db clients would be an App Engine app. Since GCE firewalls do not include App Engine instances (meaning App Engine instances are considered "outside" the firewall) we would need to open ports in the firewall to the Cassandra nodes, effectively putting our database on the public Internet.
Is this reasonable to do? I have read up on Cassandra's authentication scheme (http://www.datastax.com/documentation/cassandra/2.0/cassandra/security/securityTOC.html) but I'm certainly not an expert and thus I don't trust that I can properly evaluate whether this scheme is strong enough to protect a publicly available database server.
If this is a bad idea, what's our best alternative? Writing some kind of authenticating app in front of each database is rather unappealing since (1) we obviously want the db to be fast, so any extra steps in the way are counter to that goal, and (2) it might necessitate custom changes to the standard Cassandra client libs/programs.
Is there a standard practice here?
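For reference, the built-in authentication scheme mentioned above is purely credential-based from the client's point of view. A minimal sketch with the DataStax Python driver, where the address, keyspace and credentials are placeholders and `PasswordAuthenticator` is assumed to be enabled in cassandra.yaml:

```python
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Credentials are checked by Cassandra's PasswordAuthenticator;
# client-to-node encryption should also be enabled if the nodes are public.
auth = PlainTextAuthProvider(username='app_user', password='a-strong-password')
cluster = Cluster(['203.0.113.10'], port=9042, auth_provider=auth)

session = cluster.connect('app_keyspace')
rows = session.execute('SELECT key, value FROM settings LIMIT 10')
for row in rows:
    print(row.key, row.value)
```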

Securing elasticsearch

I am completely new to Elasticsearch, but I like it very much. The only thing I can't find and can't get done is securing Elasticsearch for production systems. I've read a lot about using nginx as a proxy in front of Elasticsearch, but I've never used nginx and have never worked with proxies.
Is this the typical way to secure elasticsearch in production systems?
If so, are there any tutorials or nice reads that could help me implement this? I really would like to use Elasticsearch in our production system instead of Solr and Tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There's also some things here you should know: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone that can send arbitrary requests to your cluster is a “super user”.
Disable dynamic scripts. They are dangerous.
Understand that sometimes tricky configuration is required to limit access to indexes.
Consider the performance implications of multiple tenants; a weakness or a bad query in one can bring down an entire cluster!
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). Even without basic auth in your proxy rules, you might, for instance, restrict access to various endpoints to specific users or from specific IP addresses.
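From the client's point of view, going through such a proxy just means sending basic-auth credentials over HTTPS. A minimal sketch with the official Python client, where the hostname and credentials are placeholders and the nginx proxy is assumed to terminate TLS in front of Elasticsearch:

```python
from elasticsearch import Elasticsearch

# The proxy enforces basic auth; HTTPS keeps the credentials from going over the wire in clear text.
es = Elasticsearch(
    ['https://search.example.com:443'],
    http_auth=('search_user', 'a-strong-password'),
    verify_certs=True,
)

print(es.info())
```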
What we do in one of our environments is to use Docker. Docker containers are only accessible to the world AND/OR other Docker containers if you explicitly define them as such. By default, they are blind.
In our docker-compose setup, we have the following containers defined:
nginx - Handles all web requests, serves up static files and proxies API queries to a container named 'middleware'
middleware - A Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
redis
mongodb
elasticsearch
The net effect of this arrangement is that access to elasticsearch can only be through the middleware piece, which ensures authentication, roles and permissions are correctly handled before any queries are sent through.
A full Docker environment is more work to set up than a simple nginx proxy, but the end result is something that is more flexible, scalable and secure.
Here's a very important addition to the info presented in answers above. I would have added it as a comment, but don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield in Elastic's documentation.
A key point to note is that this requires Elasticsearch 1.5 or newer.
Yeah, I also had the same question, but I found one plugin provided by the Elasticsearch team, i.e. Shield. The free version is limited; for production you need to buy a license. Please find the attached link for your perusal.
https://www.elastic.co/guide/en/shield/current/index.html
