Various Kubernetes security recommendations tell you to avoid SSHing into containers and to use kubectl exec instead. The prime reason quoted is the possibility of escaping to the underlying host resources via SSH into a container. So I have the following specific queries:
Which features of kubectl prevent you from accessing host resources, and why does SSH carry more risk of accessing host resources compared to kubectl? How is kubectl more secure?
Can SSH bypass Pod Security Policies and access/mount paths on the underlying host that are restricted by the pod security policy?
If SSHing into containers is unavoidable, how can it be secured in the best possible way?
If the reason given is "you can escape via one and not the other", then I think it comes from somebody who doesn't understand the security mechanisms involved. There are other reasons to prefer kubectl exec over SSH, such as audit logging integrated with everything else in Kubernetes, and easy access revocation, but those are possible to get with SSH too; it's just more work.
kubectl runs client-side. If there were features in it that would prevent you from escaping, you could just patch them out.
No, those are on the pod and handled by the underlying kernel. SSH would only get you a shell in the container, just like kubectl exec would.
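For reference, here is a minimal sketch of a PodSecurityPolicy (the policy/v1beta1 API, since deprecated and removed in newer Kubernetes releases) that restricts host access; the policy name, path prefix, and allowed volume types are illustrative assumptions, not taken from the question:

    # Illustrative PodSecurityPolicy sketch; names and paths are placeholders
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted-example
    spec:
      privileged: false          # no privileged containers
      hostPID: false             # no access to the host PID namespace
      hostIPC: false
      hostNetwork: false
      allowedHostPaths:          # only this host path may be mounted
        - pathPrefix: /var/log/app
          readOnly: true
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      runAsUser:
        rule: MustRunAsNonRoot
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny

These restrictions are applied by the admission controller when the pod is created, so it makes no difference whether you later open a shell with kubectl exec or over SSH.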
Use public-key authentication, make sure to have a strategy for ensuring your software in the container is up-to-date. Think about how you're going to manage the authorized_keys file and revocation of compromised SSH keys there. Consider whether you should lock down access to the port SSH is running on with firewall rules.
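If you do run sshd in the container, a minimal sketch of a hardened sshd_config along those lines might look like this (key-only authentication, no root login; the values are illustrative):

    # /etc/ssh/sshd_config (excerpt) - key-based auth only
    Port 22
    PasswordAuthentication no        # disable password logins
    PubkeyAuthentication yes         # require public-key authentication
    PermitRootLogin no               # never allow direct root logins
    AuthorizedKeysFile .ssh/authorized_keys
    MaxAuthTries 3
    LoginGraceTime 30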
The mere fact that you have to run an SSH server in your container, meaning an extra process running in the container and keys to manage, is reason enough not to want to SSH into a container.
So, that's one drawback. Another depends on the use case, and it's a risk. Why would you want to SSH into a container? One reason I can see is that you want to do it from an external host (one without kubectl installed and authenticated against the api-server). That means you have to expose an endpoint to the outside world, or at least to your network.
I apologize for the naive question. I am using my personal Docker registry and I need to disable the global Docker registry. How can I list the configured remote registries and disable one of them?
I searched on Google but did not find any solution.
The Docker daemon can connect to any Docker registry in the world, as long as the registry has a valid HTTPS certificate. There is no "list", just as there is no list of all the web pages in the world.
disable one of them?
Set up a firewall, remove the DNS names, set up an HTTPS proxy that filters the traffic, etc. See also: How to block a Docker registry?
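As a rough sketch, one way to cut off Docker Hub from a single host is to null-route its hostnames or drop the traffic at the firewall; registry-1.docker.io and auth.docker.io are the Hub endpoints at the time of writing and may change:

    # Option 1: null-route the registry hostnames in /etc/hosts
    echo "127.0.0.1 registry-1.docker.io" >> /etc/hosts
    echo "127.0.0.1 auth.docker.io"       >> /etc/hosts

    # Option 2: drop outbound HTTPS traffic to the registry at the firewall
    iptables -A OUTPUT -p tcp --dport 443 -d registry-1.docker.io -j DROP

Note that iptables resolves the hostname once when the rule is inserted, so a DNS-level or proxy-level block is more robust.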
I'm experimenting with Docker Machine, trying to set it up on an already existing machine using docker-machine create --driver generic. I noticed that it reconfigures the firewall so that port 2376 is open to the Internet. Does it also set up proper authentication, or is there a risk that I'm exposing root access to this machine as a side effect?
By default, docker-machine configures mutual TLS (mTLS) to both encrypt the communication, and verify the client certificate to limit access. From the docker-machine documentation:
As part of the process of creation, Docker Machine installs Docker and configures it with some sensible defaults. For instance, it allows connection from the outside world over TCP with TLS-based encryption and defaults to AUFS as the storage driver when available.
You should see environment variables configured by docker-machine, such as DOCKER_HOST and DOCKER_TLS_VERIFY, pointing to the remote host and enabling the mTLS certificates. Typically, port 2375 is an unencrypted and unsecured port that should never be used, while 2376 is configured with at least TLS, and hopefully mTLS (without the mutual part to verify clients, security is non-existent). For more details on what it takes to configure this, see https://docs.docker.com/engine/security/https/
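For example, running docker-machine env against the machine should print something like the following (the machine name, IP, and paths are illustrative):

    $ docker-machine env mymachine
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://203.0.113.10:2376"
    export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/mymachine"
    export DOCKER_MACHINE_NAME="mymachine"
    # Run this command to configure your shell:
    # eval $(docker-machine env mymachine)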
All that being said, docker with mTLS is roughly the same security as SSH with only key pair logins allowed. Considering the access it grants to the host, I personally don't leave either of these exposed to the internet despite being fairly secure. When possible, I use IP whitelists, VPNs, or other measures to limit access. But many may feel relatively safe leaving these ports exposed.
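As a sketch, assuming a single admin workstation IP, whitelisting access to port 2376 with iptables might look like this (the addresses are placeholders):

    # Allow the admin workstation, drop everyone else on the Docker TLS port
    iptables -A INPUT -p tcp --dport 2376 -s 198.51.100.7 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2376 -j DROP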
Unless you are using certificates to secure the socket, it's prone to attacks. See more info here.
In the past, some of my test cloud instances were compromised and turned into bitcoin miners. In one instance, since there were keys available on that particular host, the attacker could use those keys to create new cloud instances.
First of all, I am not an expert in K8s; I understand some of the concepts and have already gotten my hands dirty with the configuration.
I correctly set up the cluster configured by my company, but I have this issue:
I am working on a cluster with 2 pods, ingress rules are correctly configured for www.my-app.com and dashboard.my-app.com.
Both pods run on the same VM.
If I enter the dashboard pod (kubectl exec -it $POD bash) and try to curl http://www.my-app.com, I land on the dashboard pod again (the same happens the other way around, from www to dashboard).
I have to use http://www-svc.default.svc.cluster.local and http://dashboard-svc.default.svc.cluster.local to land on the correct pods, but this is a problem (links generated by the other app will contain the internal k8s host instead of the "public" URL).
Is there a way to configure routing so I can access pods with their "public" hostnames, from the pods themselves?
So what should happen when you curl is that the external DNS record (www.my-app.com in this case) resolves to your external IP address, usually a load balancer, which then sends traffic to a Kubernetes service. That service should then send traffic to the appropriate pod. It would seem that you have a misconfigured service. Make sure your services have different external IPs for dashboard and www; a simple kubectl get svc should show this. My guess is that the external IP is wrong, or the service is pointing to the wrong pod, which you can see with kubectl describe svc <name of service>.
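As a sketch, assuming the services are named www-svc and dashboard-svc as in the question and the pods carry app: www and app: dashboard labels (an assumption), each service should select only its own pods and get its own external IP:

    # Illustrative manifests; label values and ports are assumptions
    apiVersion: v1
    kind: Service
    metadata:
      name: www-svc
    spec:
      type: LoadBalancer
      selector:
        app: www            # must match only the www pod's labels
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: dashboard-svc
    spec:
      type: LoadBalancer
      selector:
        app: dashboard      # must match only the dashboard pod's labels
      ports:
        - port: 80
          targetPort: 8080

kubectl describe svc www-svc should then list only the www pod under Endpoints.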
If a Docker-enabled VM is restarted, e.g. due to Azure patching the VM or for whatever reason, the node can get a new IP address (VirtualBox can cause this, and Azure too).
This in turn results in the cert no longer being valid, and Docker fails to start on that machine.
If I use Docker Swarm, the result is that the restarted node will be stuck in status Pending indefinitely.
If I then do a docker-machine regenerate-certs mymachine, it starts working again.
How should I reason around this?
I guess there is no way around having nodes being restarted, so how do you deal with this?
Regarding Azure, you can ensure your VM keeps its public IP address after a restart by using "Reserved IP" addresses. Please note that using reserved IPs on Azure (as with other cloud providers) may incur additional charges. https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-reserved-public-ip/
Another way to handle this is to use discovery. Swarm offers a discovery mechanism which supports etcd, Consul and ZooKeeper. Find more details here:
https://docs.docker.com/swarm/discovery/
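For instance, with the legacy standalone Swarm and a Consul backend, joining and managing nodes looks roughly like this (the IPs and ports are placeholders):

    # On each node: register with the discovery backend
    docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500

    # On the manager: discover the nodes through the same backend
    docker run -d -p 4000:4000 swarm manage -H :4000 \
        --advertise <manager-ip>:4000 consul://<consul-ip>:8500

Because nodes are looked up through the discovery backend rather than a hard-coded address, a node that comes back with a new IP simply re-registers itself.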
I'm trying to set up a kubernetes cluster for a development environment (local VMs). Because it's development, I'm not using working certs for the api-server. It would seem I have to use the secure connection in order to connect minion daemons such as kube-proxy and kubelet to the master's kube-apiserver. Has anyone found a way around that? I haven't seen anything in the docs about being able to force the insecure connection or ignore that the certs are bad. I would assume there's a flag for it when running either the minion or master daemons, but I've had no luck. Etcd is working; it shows entries from both master and minions, and the logs show attempts at handshakes that are clearly failing due to bad certs.
You can set the flag --insecure-bind-address=0.0.0.0 when starting kube-apiserver to expose the unauthenticated API endpoint running on port 8080 to your network (by default it is only accessible on localhost).
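As a sketch, using the flag names from the older Kubernetes releases this question refers to (they have since been deprecated or removed), the master and minion daemons could then be pointed at the plain HTTP endpoint like this; <master-ip> is a placeholder:

    # Master: expose the unauthenticated API on all interfaces
    kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 ...

    # Minions: talk to the api-server over plain HTTP instead of TLS
    kubelet --api-servers=http://<master-ip>:8080 ...
    kube-proxy --master=http://<master-ip>:8080 ...

Needless to say, this should only ever be done on an isolated development network.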