How to get SSL on a Kubernetes application?

I have a simple Meteor app deployed on Kubernetes. I associated an external IP address with the server, so that it's accessible from within the cluster. Now I want to expose it to the internet and secure it (using the HTTPS protocol). Can anyone give simple instructions for this step?

In my opinion, kube-lego is the best solution for GKE. Here's why:
Uses Let's Encrypt as a CA
Fully automated enrollment and renewals
Minimal configuration in a single ConfigMap object
Works with nginx-ingress-controller (see example)
Works with GKE's HTTP Load Balancer (see example)
Multiple domains fully supported, including virtual hosting multiple https sites on one IP (with nginx-ingress-controller's SNI support)
Example configuration (that's it!):
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  lego.email: "your@email"
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"
Example Ingress (you can create more of these):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site1
  annotations:
    # remove next line if not using nginx-ingress-controller
    kubernetes.io/ingress.class: "nginx"
    # next line enables kube-lego for this Ingress
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - site1.com
    - www.site1.com
    - site2.com
    - www.site2.com
    secretName: site12-tls
  rules:
  ...

There are several ways to set up an SSL endpoint, but any solution needs to solve two issues: first, obtaining a valid certificate and key; second, setting up an SSL termination point in your infrastructure.
Have a look at the k8s Ingress controller. You can provide an ingress controller with a certificate/key Secret from the k8s secret store to set up an SSL endpoint. Of course, this requires you to already have a valid certificate and key.
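For example, assuming you already have a certificate and key as cert.pem and key.pem (placeholder file and secret names), the Secret can be created with kubectl and then referenced from an Ingress via spec.tls[].secretName:
kubectl create secret tls my-app-tls --cert=cert.pem --key=key.pem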
You could also have a look at k8s-specific solutions for issuing and using certificates, like the Kubernetes Letsencrypt Controller, but I have never used them and cannot say how well they work.
Here are some general ideas for issuing and using SSL certificates:
1. Getting a valid SSL certificate and key
AWS
If you are running on AWS, the easiest way I can think of is to set up an ELB with a certificate from AWS Certificate Manager, which issues and renews the certificate for you automatically.
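A hedged sketch of what that looks like on a Service (the certificate ARN is a placeholder pointing at an ACM certificate; the same annotations appear in the EKS example further down this page):
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # placeholder ARN of a certificate managed by AWS Certificate Manager
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..."
    # the ELB terminates TLS and talks plain HTTP to the pods
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
  selector:
    app: my-app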
LetsEncrypt
You could also have a look at LetsEncrypt to issue free certificates for your domain. The nice thing about it is that you can automate the cert-issuing process.
CA
Of course, you could always go the old-fashioned way and buy a certificate from a CA that you trust.
2. Setting up the SSL endpoint
AWS
Again, if you have an ELB then it already acts as the SSL endpoint and you are done. Of course, the client <-> ELB connection is encrypted, but the ELB <-> k8s-cluster leg is unencrypted.
k8s ingress controller
As mentioned above, depending on the k8s version you use, you could also set up a TLS-terminating ingress.
k8s proxy service
Another option is to set up a service inside your k8s cluster which terminates the SSL connection and proxies the traffic to your Meteor application unencrypted.
You could use nginx as a proxy for this. In that case, I suggest you store your certificate's key in the k8s secret store and mount it into the nginx container. NEVER ship a container image with secrets such as certificate keys baked in! Of course, you still need to get the encrypted traffic to a k8s node somehow; again, there are several ways to achieve this. The easiest is to point your DNS entry at the k8s nodes, but ideally you would use a TCP load balancer.
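For illustration, a minimal sketch of such a proxy Deployment. The secret meteor-tls and the ConfigMap nginx-proxy-conf are hypothetical names; the ConfigMap would hold an nginx server block that listens on 443 with the mounted cert/key and proxies to the Meteor service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ssl-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ssl-proxy
  template:
    metadata:
      labels:
        app: nginx-ssl-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 443
        volumeMounts:
        - name: tls
          mountPath: /etc/nginx/certs   # cert and key referenced by the nginx config
          readOnly: true
        - name: conf
          mountPath: /etc/nginx/conf.d  # server block that terminates TLS and proxies upstream
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: meteor-tls        # hypothetical Secret holding tls.crt/tls.key
      - name: conf
        configMap:
          name: nginx-proxy-conf        # hypothetical ConfigMap with the nginx server block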

Related

Kubernetes + Socket.io: Pod client -> LoadBalancer service SSL issues

I have a socket.io-based node.js deployment on my Kubernetes cluster with a LoadBalancer-type service through Digital Ocean. The service does SSL termination with a certificate uploaded to DO.
I've written a pod which acts as a health check to ensure that clients are still able to connect. This pod uses node.js with the socket.io-client package, and it connects via the public domain name of the service. When I run the container locally, it connects just fine, but when I run the container as a pod in the same cluster as the service, the health check can't connect. When I shell into that pod, or any pod really, and try wget my-socket.domain.com, I get an SSL handshake error: "wrong version number".
Any idea why a client connection from outside the cluster works, a client connection from inside the cluster to a normal external server works, but a client connection from a pod in the cluster to the public domain name of the service doesn't work?
You have to set up an Ingress Controller to route traffic from a load balancer to a Service.
The flow of traffic looks like this:
INTERNET -> LoadBalancer -> [ Ingress Controller -> Service]
If you want to use SSL:
You can provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP(S) load balancer that uses the certificate.
You can deploy an ingress controller like nginx by following these instructions: ingress-controller.
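A minimal sketch of that Ingress (the secret my-socket-tls would hold the certificate and key, e.g. created with kubectl create secret tls; the backing service name is a placeholder):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-socket
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - my-socket.domain.com
    secretName: my-socket-tls
  rules:
  - host: my-socket.domain.com
    http:
      paths:
      - backend:
          serviceName: my-socket-svc   # placeholder for the socket.io Service
          servicePort: 80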
It turns out the issue is with how kube-proxy handles LoadBalancer-type services and requests to them from inside the cluster. When the service is created, kube-proxy adds iptables entries that cause requests from inside the cluster to skip the load balancer entirely, which becomes a problem when the load balancer also handles SSL termination. There is a workaround: add a loadbalancer-hostname annotation, which forces all connections to go through the load balancer. AWS tends not to have this problem because it automatically applies the workaround to its service configurations, but Digital Ocean does not.
Here are some more details:
https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
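A sketch of the workaround on the Service, assuming the annotation documented in the DigitalOcean cloud controller manager docs linked above (names are placeholders; any existing SSL/certificate annotations stay as they are):
apiVersion: v1
kind: Service
metadata:
  name: my-socket-svc   # placeholder name
  annotations:
    # forces in-cluster clients to go through the LB hostname instead of short-circuiting it
    service.beta.kubernetes.io/do-loadbalancer-hostname: "my-socket.domain.com"
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
  selector:
    app: my-socket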

How to set information in Kubernetes Network Policy dynamically?

I have a few questions regarding Kubernetes: how do I secure a Kubernetes cluster?
My plan is to develop an application that secures a Kubernetes cluster by default. I have written and tested a few Network Policies successfully.
As a second step, I want to set this information dynamically in my application, based on the cloud provider and so on.
1.) I want to block access to the host network as well as the metadata service (my cluster runs on AWS):
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.250.0.0/16 # host network
      - 169.254.169.254/32 # metadata service
Does anyone know how I can access the host network dynamically?
I found an issue that says that you must use the Meta Data Service: https://github.com/kubernetes/kubernetes/issues/24657
Does anyone know how I can find out on which cloud provider I am currently running?
Based on that information, I want to set the meta data service IP.
2.) I want to block access to the "kube-system" namespace:
egress:
- to:
  - podSelector:
      matchExpressions:
      - key: namespace
        operator: NotIn
        values:
        - kube-system
Does anyone know how I can actually enforce denying this access?
As far as I understand, the key labeled "namespace" is just a name that I chose. How does Kubernetes know that I actually mean the namespace and nothing else?
3.) I want to block Internet access:
spec:
  podSelector: {}
  policyTypes:
  - Egress
Does anyone know if something like the DNS server in the DMZ zone is still reachable?
4.) I want to block communication to pods with a different namespace:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        project: default
Here, I developed a controller that sets the namespace dynamically.
Your ideas are good in terms of a least-privilege policy, but the implementation is problematic for the following reasons.
The logic you are trying to achieve is beyond the capabilities of Kubernetes network policies. It is very difficult to combine multiple block and allow policies in k8s without them conflicting with each other. For example, your first snippet allows access to any IP outside the cluster, while your 3rd question is about blocking access to the internet - these two policies can't work simultaneously.
You shouldn't block access to the kube-system namespace, because that's where the k8s DNS service is deployed, and blocking access to it will break all communication in the cluster.
To answer your 1st question specifically:
How I can access the host network dynamically?
The cluster subnet is defined when you deploy the cluster on AWS - you should store it at creation time and inject it into your policies. Alternatively, you may be able to get it by calling an AWS API.
You can also get the cluster node IPs from Kubernetes: kubectl get nodes -o wide
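If you want to script that, a jsonpath query can pull out just the internal node IPs (a sketch):
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'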
How I can find out on which cloud provider I am currently running?
Kubernetes doesn't know which platform it is running on, but you can guess it from the node name prefix, for example: aks-nodepool1-18370774-0 or gke-...
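Another hint, assuming the cloud provider integration populates it, is the node's spec.providerID field, which is prefixed with the provider name (aws, gce, azure, ...):
kubectl get nodes -o jsonpath='{.items[0].spec.providerID}'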
Your 4th point about blocking access between namespaces is good but it would be better to do it with an ingress policy like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
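Note that this denies traffic from within the same namespace as well; if pods in default should keep talking to each other, a companion allow policy would be needed (a sketch, not part of the original answer):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: default
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}    # empty selector matches all pods in this namespace
  policyTypes:
  - Ingress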
For more details, I recommend this blog post that explains the complexities of k8s network policies: https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d
As Mark pointed out, network policies may not be able to address all your use cases. You might want to check out the Open Policy Agent project, and specifically its Gatekeeper tool, which could be used to cover at least some of your needs.

Https certificates and Kubernetes (EKS)

I would like to secure my web application running on Kubernetes (EKS). All the nodes attached to the cluster are running on private subnets.
I have one front-end service and a dozen back-end services.
The front-end service is a pod running a container on port 80. It is configured to be attached to an ELB which only accepts traffic on port 443, using an HTTPS certificate.
apiVersion: v1
kind: Service
metadata:
  name: service_name
  labels:
    app: service_name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
  - port: 443 # Exposed port
    targetPort: 80 # Container port
  selector:
    app: service_name
  type: LoadBalancer
The back-end services are pods running containers, also on port 80. None of them are configured to be accessible from outside the cluster. Back-end services talk to each other by pointing to http://service_name (NOT https), as I configured them with this template:
apiVersion: v1
kind: Service
metadata:
  name: service_name
spec:
  ports:
  - port: 80 # Exposed port
    targetPort: 80 # Container port
  selector:
    app: service_name
It all works, but is it sufficient?
Should the front-end/back-end containers also use port 443 with a wildcard HTTPS certificate? Should this configuration be done inside the containers or in the services' configurations?
I have done quite a bit of investigation now, and here is what it comes down to.
All my EKS EC2 instances run in private subnets, which means they are not accessible from outside. Yes, by default Kubernetes does not encrypt traffic between pods, which means that an attacker who gained access to my VPC (a rogue AWS engineer, someone who manages to physically access an AWS data center, someone who gets into my AWS account...) would be able to sniff the network traffic. At the same time, such an attacker would have access to much more: with access to my AWS account, for instance, they could download the HTTPS certificate themselves. It comes down to weighing the effort an attacker has to invest against the value of your data. If your data includes credit card/payment details or sensitive personal data (date of birth, health details...), encrypting traffic between pods is a must.
Anyway, to secure pod-to-pod traffic, there are 2 options.
Update all the pod source code and add the certificate there. This requires a lot of maintenance if you are running many pods, and certificates expire every other year or so.
Add an extra 'network layer' like https://istio.io/. This adds complexity to your cluster, and in the case of EKS, support from AWS will be 'best effort'. Ideally, you would pay for Istio support.
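For reference, with Istio installed, mesh-wide mutual TLS can be turned on with a single resource (a sketch, assuming a recent Istio release that provides the security.istio.io/v1beta1 API):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applied in the root namespace, so it is mesh-wide
spec:
  mtls:
    mode: STRICT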
For the load balancer, I decided to add an ingress controller to the cluster (Nginx, Traefik...) and set it up with HTTPS. That's critical, as the ELB sits on the public subnets.

Is there a way to serve external URLs from nginx-kubernetes ingress?

Setup: Kubernetes cluster on AKS with nginx-kubernetes ingress. Azure Application Gateway routing domain with SSL certificate to nginx-kubernetes.
No problems serving everything in Kubernetes.
Now I have moved static content to Azure Blob Storage. There's an option to use a custom domain, which works fine, but it does not allow using a custom SSL certificate. The only possible way is to set up a CDN and use the Verizon plan in order to use custom SSL certificates.
I'd prefer to keep all the routing in the ingress configuration, since some subroutes are directed to different Kubernetes services. Is there a way to mask and rewrite a path to the external blob storage URL in nginx-kubernetes? Or is there any other option that proxies an external URL through the ingress?
I don't mind having direct blob storage URLs for resources, but the main entry point should use the custom domain.
Not exactly the answer to the question, but the answer to the problem. Unfortunately, this isn't documented very well. The solution is to create a service of type "ExternalName". According to https://akomljen.com/kubernetes-tips-part-1/, the service should look like this:
kind: Service
apiVersion: v1
metadata:
  name: external-service
  namespace: default
spec:
  type: ExternalName
  externalName: full.qualified.domain.name
I just tried it and it works like a charm.
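To tie it back to the original question, the ExternalName service can then be referenced from an Ingress path like any other service. A sketch, with host and path as placeholders, assuming the storage endpoint accepts plain HTTP on port 80 (for HTTPS to the backend you would additionally need the nginx backend-protocol annotation):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static-content
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # make nginx send the blob storage hostname upstream instead of the ingress host
    nginx.ingress.kubernetes.io/upstream-vhost: full.qualified.domain.name
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /static
        backend:
          serviceName: external-service
          servicePort: 80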

Configuring HTTPS for an internal IP on Azure Kubernetes

I have deployed a Kubernetes cluster to a custom virtual network on Azure using acs-engine. There is an ASP.NET Core 2.0 Kestrel app running on the agent VMs and the app is accessed over VPN through a Service of the Azure internal load balancer type. Now I would like to enable HTTPS on the service. I have already obtained a domain name and a certificate but have no idea how to proceed. Apparently configuring Kestrel to use HTTPS and copying the certificate to each container is not the way to go.
I have checked out tutorials such as ingress on k8s using acs and configuring the Nginx Ingress Controller for TLS termination on k8s on Azure, but both of them end up exposing a public external IP, and I want to keep the IP internal and not accessible from the internet. Is this possible? Can it be done without ingresses and their controllers?
While for some reason I still can't access the app through the ingress, I was able to create an internal ingress service with the IP I want, using the following configuration:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: nginx-ingress-svc
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 443
  loadBalancerIP: 130.10.1.9
  selector:
    k8s-app: nginx-ingress-controller
The tutorial you linked is a bit outdated; the instructions have you go to an 'examples' folder in the GitHub repo they link, but that folder doesn't exist anymore. Anyhow, a normal nginx ingress controller consists of several parts: the nginx deployment, the service that exposes it, and the default backend parts. Look at the YAMLs they ask you to deploy, find the second part of what I listed - the ingress service - and change its type from LoadBalancer to ClusterIP (or delete type altogether, since ClusterIP is the default), as shown below.
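For clarity, a sketch of what that internal variant of the ingress service would look like, based on the manifest above with only the type changed:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
spec:
  type: ClusterIP   # or omit type entirely; ClusterIP is the default
  ports:
  - port: 443
    targetPort: 443
  selector:
    k8s-app: nginx-ingress-controller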
