Cassandra on Azure: how to configure security groups

I just installed the DataStax distribution of Cassandra as a cluster.
I have a question regarding security groups and how to limit access.
Currently, there are no security groups on the VNet or on any of the VMs, so anyone can connect to the cluster.
The problem starts when I try to set a security group on the subnet. I believe this is because the HTTP communication between the Cassandra nodes uses the public IPs rather than the internal IPs; OpsCenter reports an error that the HTTP connection is down.
The question is: how can I restrict access to the cluster (to a specific IP) while still allowing all the Cassandra nodes to communicate with each other?

It's good practice to exercise security when running inside any public cloud, whether it's Azure, GCE, AWS, etc. Enabling internode SSL is a very good idea because it secures the internode gossip communications. You should also introduce internal authentication (at the very least) so that a user/password is required to log in to cqlsh. I would also recommend using client-to-node SSL; 1-way should be sufficient for most cases.
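As a concrete sketch, internal authentication is enabled with two settings in cassandra.yaml (these replace the AllowAll defaults):

    # cassandra.yaml - require username/password and enable permission checks
    authenticator: PasswordAuthenticator
    authorizer: CassandraAuthorizer

After restarting the nodes you can log in with the default cassandra/cassandra superuser and create your own users.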
I'm not so sure about Azure, but I know that with AWS and GCE the instances only have a locally routed internal IP (usually in the 10.0.0.0/8 private range) and the public IP is provided via NAT. You would normally use the public IP as the broadcast_address, especially if you are running across different availability zones where the internal IP does not route. You may also be running a client application that connects via the public IP, so you'd want to set the broadcast_rpc_address to the public IP too. Both of these are found in cassandra.yaml. The listen_address and rpc_address are both IPs that the node binds to, so they have to be locally available (i.e. you can't bind a process to an IP that isn't configured on an interface on the node).
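Putting that together, the relevant cassandra.yaml settings might look like this (all addresses here are placeholders for illustration):

    # cassandra.yaml - per-node addressing (example addresses only)
    listen_address: 10.0.0.4            # internal IP; must exist on a local interface
    broadcast_address: 52.160.10.20     # public (NAT) IP advertised to other nodes
    rpc_address: 10.0.0.4               # internal IP the node binds for client traffic
    broadcast_rpc_address: 52.160.10.20 # public IP advertised to client drivers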
Summary
Use internode SSL
Use client to node SSL
Use internal authentication at the very minimum (LDAP and Kerberos are also supported)
Useful docs
I highly recommend following the documentation here. Introducing security can be a bit tricky if you hit snags (whatever the application). I always start off by making sure the cluster is running OK with no security in place, then introduce one thing at a time, then test and verify, and then introduce the next thing. Don't configure everything at once!
Firewall ports
Client to node SSL - note: require_client_auth should be false for 1-way.
Node to node SSL
Preparing SSL certificates
Unified authentication (internal, LDAP, Kerberos etc)
Note that when generating SSL keys and certs for node-to-node SSL, you'd typically generate just the one pair and use it across all the nodes. Otherwise, whenever you introduce a new node you'd have to import its new cert into all the existing nodes, which isn't really scalable. In my experience working with organisations running large clusters, this is how they manage things. Client applications may well use the same key, or at least a separate one of their own.
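A rough sketch of that workflow with Java's keytool (the alias, passwords, and file names are made up for illustration):

    # Generate one keypair to share across all nodes
    keytool -genkeypair -alias cassandra-node -keyalg RSA -keysize 2048 \
        -validity 365 -keystore keystore.jks -storepass changeit \
        -dname "CN=cluster1, OU=example, O=example, C=US"

    # Export the public certificate...
    keytool -exportcert -alias cassandra-node -keystore keystore.jks \
        -storepass changeit -file cassandra-node.cer

    # ...and import it into a truststore that is copied to every node
    keytool -importcert -alias cassandra-node -file cassandra-node.cer \
        -keystore truststore.jks -storepass changeit -noprompt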
Further info / reading
2-way SSL is supported, but it's not as common as 1-way. It is typically a bit more complex and is switched on with require_client_auth: true in the cassandra.yaml.
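For reference, this lives in the client_encryption_options block of cassandra.yaml; a sketch with placeholder paths and passwords:

    client_encryption_options:
        enabled: true
        keystore: conf/keystore.jks
        keystore_password: changeit
        # 1-way SSL: leave this false; 2-way (mutual) SSL: set it to true
        require_client_auth: false
        # the truststore is only needed when require_client_auth is true
        truststore: conf/truststore.jks
        truststore_password: changeit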
If you're using OpsCenter with SSL, the docs (below) will cover things. Note that essentially it's in two places:
SSL between OpsCenter and the agents and the cluster (the same as client-to-node SSL above)
SSL between OpsCenter and the Agents
OpsCenter SSL configuration
I hope this helps you towards achieving what you need to!

Related

Is it safe to have the TCP port 2376 open by docker-machine (generic driver) to the internet?

I'm experimenting with Docker Machine, trying to set it up on an already existing machine using docker-machine create --driver generic. I noticed that it reconfigures the firewall so that port 2376 is open to the Internet. Does it also set up proper authentication, or is there a risk that I'm exposing root access to this machine as a side effect?
By default, docker-machine configures mutual TLS (mTLS) to both encrypt the communication and verify the client certificate to limit access. From the docker-machine documentation:
As part of the process of creation, Docker Machine installs Docker and configures it with some sensible defaults. For instance, it allows connection from the outside world over TCP with TLS-based encryption and defaults to AUFS as the storage driver when available.
You should see environment variables configured by docker-machine, with DOCKER_HOST and DOCKER_TLS_VERIFY pointing to the remote host and enabling mTLS certificate verification. Typically, port 2375 is the unencrypted and unsecured port that should never be used, while 2376 is configured with at least TLS, and hopefully mTLS (without the mutual part to verify clients, security is nonexistent). For more details on what it takes to configure this, see https://docs.docker.com/engine/security/https/
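You can inspect those variables with docker-machine env; the output looks something like this (the machine name, IP, and paths here are hypothetical):

    $ docker-machine env mymachine
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://203.0.113.10:2376"
    export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/mymachine"
    export DOCKER_MACHINE_NAME="mymachine"

    # Load them into the current shell so the docker CLI talks to the remote host:
    $ eval $(docker-machine env mymachine)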
All that being said, docker with mTLS is roughly the same security as SSH with only key pair logins allowed. Considering the access it grants to the host, I personally don't leave either of these exposed to the internet despite being fairly secure. When possible, I use IP whitelists, VPNs, or other measures to limit access. But many may feel relatively safe leaving these ports exposed.
Unless you are using certificates to secure the socket, it's prone to attack. See more info here.
In the past, some of my test cloud instances were compromised and turned into bitcoin miners. In one instance, since there were keys available on that particular host, the attacker could use those keys to create new cloud instances.

Disable a microservice's initially exposed port after configuring it in a gateway

Hello, I've been searching everywhere and have not found a solution to my problem, which is: how can I make my API accessible through the gateway-configured endpoint only? Currently I can access my API directly at localhost:9000, as well as through localhost:8000, the Kong gateway port that I secured and configured. But what's the point of using the gateway if the initial port is still accessible?
So I am wondering: is there a way to disable port 9000 and only access my API through Kong?
Firewalls / security groups (in the cloud), private (virtual) networks, and multiple network adapters are usually used to differentiate public vs private network access. Cloud vendors (AWS, Azure, etc.) and hosting infrastructures (e.g. Kubernetes, Cloud Foundry) usually have such mechanisms built in.
In a production environment, Kong's external endpoint would run with public network access while all the service endpoints would sit in a private network.
You are currently running everything locally on a single machine/network, so your best option is probably to use a firewall to restrict access by port.
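For instance, on Linux a pair of illustrative iptables rules can block the upstream port for everything except loopback traffic, so only the locally running Kong can reach it:

    # Allow the local Kong process (loopback) to reach the upstream on 9000...
    iptables -A INPUT -p tcp --dport 9000 -s 127.0.0.1 -j ACCEPT
    # ...and drop everything else aimed at that port
    iptables -A INPUT -p tcp --dport 9000 -j DROP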
Additionally, it is possible to configure separate roles for multiple Kong nodes - one (or more) can be "control plane" nodes that only you can access, and that are used to set and review Kong's configuration, access metrics, etc.
One (or more) other Kong nodes can be "data plane" nodes that accept and route API proxy traffic - but that doesn't accept any Kong Admin API commands. See https://konghq.com/blog/separating-data-control-planes/ for more details.
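One way to realize that split in newer Kong versions (2.x hybrid mode) is via the role property in kong.conf; a rough sketch with made-up hostnames and paths, which may not be exactly what the linked blog describes:

    # Control plane node: accepts Admin API calls, holds the configuration
    role = control_plane
    cluster_cert = /etc/kong/cluster.crt
    cluster_cert_key = /etc/kong/cluster.key

    # Data plane node: proxies traffic only, no Admin API
    role = data_plane
    database = off
    cluster_control_plane = cp.internal.example:8005
    cluster_cert = /etc/kong/cluster.crt
    cluster_cert_key = /etc/kong/cluster.key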
Thanks for the answers, they each give a different perspective. But since I have a Scala/Play microservice, I enabled a built-in Play Framework HTTP filter in my application.conf so that only the Kong gateway is allowed. Now, when trying to access my application via localhost:9000, I get denied, and that's absolutely what I was looking for.
I hope this answer will be helpful to others in the same situation.
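My guess at the kind of configuration meant here is Play's AllowedHostsFilter, which rejects requests whose Host header doesn't match a whitelist (the host/port below are hypothetical):

    # application.conf
    play.filters.hosts {
      # Only accept requests addressed through the Kong gateway
      allowed = ["localhost:8000"]
    }

Note that whether the whitelist matches depends on how Kong rewrites the Host header when proxying (its preserve_host setting), so the entry may need to match whatever Kong sends upstream.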

Securing a Solr cloud?

I have to prove my SolrCloud is secure.
From my understanding of what I am reading, I can secure the Solr instances talking to each other via basic authentication and SSL, which is great: it's secure, it works.
However, I can't see anything that will allow me to secure Zookeeper - or am I mistaken? Is there anything in an open Zookeeper that will allow a malicious user on my internal network to "hack" my SolrCloud, or is it the case that Zookeeper doesn't have anything that needs to be hidden?
Regarding securing ZooKeeper, you may want to check the documentation on ZooKeeper access control using ACLs.
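As a quick illustration of what those ACLs look like from the ZooKeeper CLI (zkCli.sh), with made-up credentials:

    # Authenticate the current session with a digest user
    addauth digest solradmin:secretpassword
    # Restrict the Solr znode so only that authenticated user has full rights
    setAcl /solr auth::cdrwa
    # Verify the ACL that is now in place
    getAcl /solr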
What we do at Measured Search for customers who are using our Solr-as-a-Service platform is allow them to restrict access to Zookeeper with IP filtering. They can either specify a specific IP address or a CIDR (range) that can have access to Zookeeper.
http://docs.measuredsearch.com/security/
That way, they can secure their Solr instances independently of Zookeeper.
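At the host level, that kind of IP filtering could be sketched with iptables (the CIDR is illustrative; 2181 is ZooKeeper's default client port):

    # Permit ZooKeeper client connections only from a trusted range...
    iptables -A INPUT -p tcp --dport 2181 -s 203.0.113.0/24 -j ACCEPT
    # ...and drop everyone else
    iptables -A INPUT -p tcp --dport 2181 -j DROP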

Can kube-apiserver allow an insecure connection outside of localhost?

I'm trying to set up a Kubernetes cluster for a development environment (local VMs). Because it's development, I'm not using working certs for the api-server. It would seem I have to use the secure connection in order to connect minion daemons such as kube-proxy and kubelet to the master's kube-apiserver. Has anyone found a way around that? I haven't seen anything in the docs about being able to force the insecure connection or to ignore that the certs are bad; I would assume there's a flag for it when running either the minion or master daemons, but I've had no luck. etcd is working: it shows entries from both the master and the minions, and the logs show handshake attempts that are definitely failing due to bad certs.
You can set the flag --insecure-bind-address=0.0.0.0 when starting kube-apiserver to expose the unauthenticated API endpoint running on port 8080 to your network (by default it is only accessible on localhost).
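A minimal sketch of the flags involved (all other apiserver flags omitted; note that this insecure port was deprecated and eventually removed in later Kubernetes releases):

    kube-apiserver \
      --insecure-bind-address=0.0.0.0 \
      --insecure-port=8080

The minion daemons can then be pointed at the plain-HTTP endpoint, e.g. http://<master-ip>:8080, instead of the TLS-secured port.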

Kerberos Fully qualified domain name

I'm currently looking to configure a Kerberos V realm and am wondering about the risk of having systems in my environment that do not have FQDNs (Fully Qualified Domain Names).
A lot of my searches mention using FQDNs, but they don't mention what the risks of not using FQDNs are.
It's not exactly a risk in the security sense, but it will create much confusion in configuring various clients and servers.
Kerberos depends on the ability of the client and server to agree on the service name to be used, by some process that is outside the Kerberos protocol. In other words, if I want to use Kerberos telnet to some host, I need to know in advance what service principal that host is using in its /etc/krb5.keytab. There is no way in the Kerberos protocol for the client to learn this.
By default, Kerberos clients usually do a gethostbyname, then a gethostbyaddr on the IP address returned, and then use that hostname to construct a service principal. This is where you will run into problems. You might try turning off DNS canonicalization altogether (it's an option in krb5.conf).
There is also the problem of default realm based on hostname, but that's a much simpler one to solve using values in /etc/krb5.conf.
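A sketch of the krb5.conf pieces involved, with a hypothetical realm and mappings (dns_canonicalize_hostname and rdns are the options that control the canonicalization behaviour described above):

    [libdefaults]
        default_realm = EXAMPLE.COM
        # Stop the client canonicalizing hostnames via forward/reverse DNS
        dns_canonicalize_hostname = false
        rdns = false

    [domain_realm]
        # Map domains or non-FQDN hostnames to a realm explicitly
        .example.com = EXAMPLE.COM
        shortname    = EXAMPLE.COM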
