Can kube-apiserver allow insecure connections from outside localhost? - coreos

I'm trying to set up a Kubernetes cluster for a development environment (local VMs). Because it's development, I'm not using working certs for the api-server. It seems I have to use the secure connection in order to connect minion daemons such as kube-proxy and kubelet to the master's kube-apiserver. Has anyone found a way around that? I haven't seen anything in the docs about being able to force the insecure connection or to ignore that the certs are bad; I would assume there's a flag for it when running either the minion or master daemons, but I've had no luck. Etcd is working: it shows entries from both master and minions, and the logs show handshake attempts that fail because of the bad certs.

You can set the flag --insecure-bind-address=0.0.0.0 when starting kube-apiserver to expose the unauthenticated API endpoint (which runs on port 8080) to your network; by default it is only accessible on localhost.
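As a minimal sketch, the launch might look roughly like this, with the node daemons then pointed at the insecure endpoint over plain HTTP (the etcd address, service CIDR, and master address are placeholders, and the kubelet/kube-proxy flags assume the older releases this setup is based on):

    kube-apiserver \
      --insecure-bind-address=0.0.0.0 \
      --insecure-port=8080 \
      --etcd-servers=http://127.0.0.1:2379 \
      --service-cluster-ip-range=10.100.0.0/16

    # on each minion, talk to the master over HTTP instead of TLS
    kube-proxy --master=http://<master-ip>:8080
    kubelet --api-servers=http://<master-ip>:8080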

Related

Why do telnet and nc report a successful connection from an Azure Kubernetes pod when they shouldn't?

I have an Azure AKS Kubernetes cluster. I created one pod running an Ubuntu container from the stock Ubuntu image, and several other pods built from Java/.NET Dockerfiles.
When I exec into any of the pods (including the Ubuntu one) and run telnet or nc against a remote server/port to check connectivity, the strange thing is that no matter which remote IP and port I choose, the command always reports that the connection succeeded, even though that IP/port should not be reachable.
For example, telnet to 1.1.1.1 on port 1111 reports success, and any other IP and port I try behaves the same way; every pod in the cluster shows the same behavior. I also tried re-creating the AKS cluster with CNI networking instead of the default kubenet network, with the same result. Could anyone help me with this? Thanks a lot in advance.
I figured out the root cause: I have Istio installed as a service mesh, and it turns out this is expected behavior by design, as described here: https://github.com/istio/istio/issues/36540
However, even though this is by design in Istio, I'm still very interested in an easy way to check whether a remote IP/port TCP connection actually works from a pod that has the Istio sidecar enabled.
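One workaround, sketched below, is to run the check from a throwaway pod that skips sidecar injection, so the TCP handshake reaches the real destination instead of being answered locally by Envoy. The nicolaka/netshoot image and the nc flags are assumptions; any image whose nc supports -z and -w will do:

    kubectl run nettest --rm -it --restart=Never --image=nicolaka/netshoot \
      --overrides='{"apiVersion":"v1","metadata":{"annotations":{"sidecar.istio.io/inject":"false"}}}' \
      -- nc -zv -w 3 1.1.1.1 1111
    # with no Envoy sidecar in the way, this should now time out or fail
    # instead of reporting a bogus "connection succeeded"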

Is it safe to have the TCP port 2376 open by docker-machine (generic driver) to the internet?

I'm experimenting with Docker Machine, trying to set it up on an already existing machine using docker-machine create --driver generic. I noticed that it reconfigures the firewall so that port 2376 is open to the Internet. Does it also set up proper authentication, or is there a risk that I'm exposing root access to this machine as a side effect?
By default, docker-machine configures mutual TLS (mTLS) to both encrypt the communication, and verify the client certificate to limit access. From the docker-machine documentation:
As part of the process of creation, Docker Machine installs Docker and configures it with some sensible defaults. For instance, it allows connection from the outside world over TCP with TLS-based encryption and defaults to AUFS as the storage driver when available.
You should see environment variables configured by docker-machine, with DOCKER_HOST and DOCKER_TLS_VERIFY pointing at the remote host and the mTLS certificates. Typically, port 2375 is the unencrypted, unauthenticated port that should never be used, and 2376 is configured with at least TLS, and hopefully mTLS (without the mutual part to verify clients, security is nonexistent). For more details on what it takes to configure this, see https://docs.docker.com/engine/security/https/
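For reference, a quick way to inspect those variables (a sketch; the machine name and values shown are placeholders):

    eval "$(docker-machine env my-machine)"
    env | grep '^DOCKER_'
    # DOCKER_HOST=tcp://203.0.113.10:2376
    # DOCKER_TLS_VERIFY=1
    # DOCKER_CERT_PATH=/home/user/.docker/machine/machines/my-machine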
All that being said, docker with mTLS is roughly the same security as SSH with only key pair logins allowed. Considering the access it grants to the host, I personally don't leave either of these exposed to the internet despite being fairly secure. When possible, I use IP whitelists, VPNs, or other measures to limit access. But many may feel relatively safe leaving these ports exposed.
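If you do want the port reachable only from a trusted network, a minimal iptables sketch looks like this (the source CIDR is a placeholder for your own admin network):

    # accept the Docker TLS port only from the trusted range, drop everything else
    iptables -A INPUT -p tcp --dport 2376 -s 203.0.113.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2376 -j DROP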
Unless you are using certificates to secure the socket, it's prone to attacks. See more info here.
In the past, some of my test cloud instances were compromised and turned into bitcoin miners. In one instance, since there were keys available on that particular host, the attacker could use those keys to create new cloud instances.

cassandra on azure, how to configure security groups

I just installed a DataStax Cassandra cluster.
I have a question regarding security groups and how to limit access.
Currently there are no security groups on the vnet or on any of the VMs, so everyone can connect to the cluster.
The problem starts when I try to put a security group on the subnet. I think this is because the HTTP communication between the Cassandra nodes uses the public IP rather than the internal one; OpsCenter reports that the HTTP connection is down.
The question is: how can I restrict access to the cluster (to a specific IP) while still letting all the Cassandra nodes talk to each other?
It's good practice to exercise security when running inside any public cloud, whether it's Azure, GCE, AWS, etc. Enabling internode SSL is a very good idea because it secures the internode gossip communication. You should also introduce internal authentication (at the very least) so that a user/password is required to log in to cqlsh. I would also recommend client-to-node SSL; 1-way should be sufficient for most cases.
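As a rough sketch, internal authentication is switched on in cassandra.yaml and then used from cqlsh like this (the node IP is a placeholder, and the default cassandra/cassandra superuser should be changed immediately):

    # in cassandra.yaml on every node:
    #   authenticator: PasswordAuthenticator
    #   authorizer: CassandraAuthorizer
    # then restart the nodes and log in with credentials:
    cqlsh 10.0.0.5 -u cassandra -p cassandra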
I'm not so sure about Azure, but I know that with AWS and GCE the instances only have a locally routed internal IP (usually in the 10.0.0.0/8 private range) and the public IP is provided via NAT. You would normally use the public IP as the broadcast_address, especially if you are running across different availability zones where the internal IP does not route. You may also have a client application connecting via the public IP, in which case you'd want to set broadcast_rpc_address to the public IP too. Both of these are found in cassandra.yaml. The listen_address and rpc_address are the IPs the node actually binds to, so they have to be locally available (i.e. you can't bind a process to an IP that isn't configured on an interface on the node).
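Putting that together, the relevant cassandra.yaml settings for a node behind NAT might look roughly like this (all IPs below are placeholders):

    listen_address: 10.0.0.5            # private IP the gossip/storage port binds to
    broadcast_address: 203.0.113.7      # public/NAT IP advertised to other nodes
    rpc_address: 10.0.0.5               # private IP the client (CQL) port binds to
    broadcast_rpc_address: 203.0.113.7  # public IP advertised to client drivers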
Summary
Use internode SSL
Use client to node SSL
Use internal authentication at the very minimum (LDAP and Kerberos are also supported)
Useful docs
I highly recommend following the documentation here. Introducing security can be a bit tricky if you hit snags (whatever the application). I always start off by making sure the cluster is running OK with no security in place, then introduce one thing at a time, then test and verify, and only then introduce the next thing. Don't configure everything at once!
Firewall ports
Client to node SSL - note that require_client_auth should be false for 1-way SSL.
Node to node SSL
Preparing SSL certificates
Unified authentication (internal, LDAP, Kerberos etc)
Note that when generating SSL keys and certs for node-to-node SSL, you'd typically generate just one key pair and use it across all the nodes. Otherwise, whenever you introduce a new node you have to import its new cert into every existing node, which isn't really scalable. In my experience working with organisations running large clusters, this is how they manage things. Client applications may well use the same key pair or a separate one.
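A rough sketch of creating that single shared key pair with keytool (the alias, passwords, common name, and file names below are placeholders):

    keytool -genkeypair -keyalg RSA -alias cluster -validity 365 \
            -dname "CN=my-cassandra-cluster" \
            -keystore cluster.keystore.jks -storepass changeit -keypass changeit
    keytool -exportcert -alias cluster -file cluster.cer \
            -keystore cluster.keystore.jks -storepass changeit
    keytool -importcert -alias cluster -file cluster.cer -noprompt \
            -keystore cluster.truststore.jks -storepass changeit
    # copy cluster.keystore.jks and cluster.truststore.jks to every node and
    # reference them from server_encryption_options in cassandra.yaml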
Further info / reading
2-way SSL is supported, but it's not as common as 1-way. It is typically a bit more complex and is switched on with require_client_auth: true in cassandra.yaml.
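For reference, a sketch of the corresponding client_encryption_options block in cassandra.yaml (paths and passwords are placeholders):

    client_encryption_options:
        enabled: true
        keystore: /etc/cassandra/conf/cluster.keystore.jks
        keystore_password: changeit
        require_client_auth: false    # set to true only for 2-way (mutual) SSL
        # for 2-way SSL you also need a truststore holding the client certs:
        # truststore: /etc/cassandra/conf/cluster.truststore.jks
        # truststore_password: changeit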
If you're using OpsCenter with SSL, the docs (below) cover it. Note that it is essentially configured in two places:
SSL between the agents and the cluster (same as client to node SSL above)
SSL between OpsCenter and the Agents
OpsCenter SSL configuration
I hope this helps you achieve what you need!

Trouble accessing Kubernetes endpoints

I'm bringing up Spark on Kubernetes according to this example: https://github.com/kubernetes/kubernetes/tree/master/examples/spark
For some reason, I'm having problems getting the master to listen on :7077 for connections from worker nodes. It appears that connections aren't being proxied down from the service. If I bring the service up, then bring the master controller up with $SPARK_MASTER_IP set to spark-master, it correctly resolves to the service IP but cannot bind the port. If I set the IP to localhost instead, it binds a local port and comes up -- since services should forward socket connections down to the pod endpoint, this should be fine, so we move on.
Now I bring up workers. They attempt to connect to the service IP on :7077 and cannot. It seems as if connections to the service aren't making it down to the endpoint. Except...
I also have a webui service configured as in the example. If I connect to it with kubectl --proxy I can get down to the web service that's served on :8080 from spark-master, by hitting it through the webui service. Yet the nearly identically-configured spark-master service on port 7077 gives no love. If I configure the master to bind a local IP, it comes up but doesn't get connections from the service. If I configure it to bind through the service, the bind fails and it can't come up at all.
I'm running out of ideas as to why this might be happening -- any assistance is appreciated. I'm happy to furnish more debugging info on request.
I'm sorry, the Spark example was broken, in multiple ways.
The issue:
https://github.com/kubernetes/kubernetes/issues/17463
It now works, as of 2/25/2016, and is passing our continuous testing, at least at HEAD (and the next Kubernetes 1.2 release).
Note that DNS is required, though it is set up by default in a number of cloud provider implementations, including GCE and GKE.

Redis Configuration, need to allow remote connections but need security

I have two EC2 instances that need to connect to an outside Redis server. The Redis config binds to 0.0.0.0 to allow this. Is there some sort of password/auth system for Redis connections? I need a way to allow my servers to connect to the remote Redis but block everyone else.
I know I can do this with iptables by whitelisting only those EC2 IP addresses for port 6379, but I was wondering if there was a proper way to do this.
Redis sports a very basic form of authentication via password protection. To enable it, you'll need to add/uncomment the requirepass directive in your configuration file and have your clients authenticate with the AUTH command.
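A minimal sketch of what that looks like (the host and password are placeholders; keep the iptables or security-group whitelist as well, since requirepass alone is a weak control):

    # redis.conf:
    #   bind 0.0.0.0
    #   requirepass a-long-random-password
    # client side:
    redis-cli -h <redis-host> -a a-long-random-password ping
    # PONG
    # or, inside an existing connection:
    #   AUTH a-long-random-password
    #   +OK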
Another approach would be to add an extra layer of security such as a secure proxy. Here's a howto: http://redislabs.com/blog/using-stunnel-to-secure-redis.
