How to set information in Kubernetes Network Policy dynamically?

I have a few questions regarding Kubernetes: how do I secure a Kubernetes cluster?
My plan is to develop an application that secures a Kubernetes cluster by default. I have written and tested a few NetworkPolicies successfully.
As a second step, I want to set this information dynamically in my application, based on the cloud provider and so on.
1.) I want to block access to the host network as well as the metadata service (my cluster runs on AWS):
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.250.0.0/16      # host network
      - 169.254.169.254/32 # metadata service
Does anyone know how I can determine the host network CIDR dynamically?
I found an issue that says you must use the metadata service: https://github.com/kubernetes/kubernetes/issues/24657
Does anyone know how I can find out which cloud provider I am currently running on?
Based on that information, I want to set the metadata service IP.
2.) I want to block access to the "kube-system" namespace:
egress:
- to:
  - podSelector:
      matchExpressions:
      - key: namespace
        operator: NotIn
        values:
        - kube-system
Does anyone know how I can enforce the actual deny here?
As far as I understand, the key labelled "namespace" is just a name that I chose. How does Kubernetes know that I actually mean the namespace and not something else?
3.) I want to block Internet access:
spec:
  podSelector: {}
  policyTypes:
  - Egress
Does anyone know if something like the DNS server in the DMZ is still reachable?
4.) I want to block communication with pods in a different namespace:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        project: default
Here, I developed a controller that sets the namespace dynamically.

Your ideas are good in terms of a least-privilege policy, but the implementation is problematic for the following reasons.
The logic you are trying to achieve is beyond the capabilities of Kubernetes network policies. It is very difficult to combine multiple block and allow policies in k8s without them conflicting with each other. For example, your first snippet allows egress to any IP outside the cluster (apart from the exceptions), and then your 3rd question is about blocking access to the internet - these two policies can't work simultaneously.
You shouldn't block access to the kube-system namespace, because that's where the k8s DNS service is deployed, and blocking access to it will prevent all communication in the cluster.
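If you do lock down egress, the usual pattern is to explicitly allow DNS traffic to the DNS pods before denying the rest. A minimal sketch, assuming the conventional k8s-app: kube-dns pod label (most distributions set it, but verify on yours):
egress:
- to:
  - namespaceSelector: {}    # any namespace...
    podSelector:
      matchLabels:
        k8s-app: kube-dns    # ...but only the DNS pods
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53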
To answer your 1st question specifically:
How can I determine the host network CIDR dynamically?
The cluster subnet is defined when you deploy it on AWS - you should store it during creation and inject it into your policies. Alternatively, you may be able to get it by calling an AWS API.
You can also get the cluster node IPs from Kubernetes: kubectl get nodes -o wide
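If you want to script that, the internal IPs can be extracted directly; a sketch using kubectl's jsonpath support:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
# prints the InternalIP of every node, space-separated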
How can I find out which cloud provider I am currently running on?
Kubernetes doesn't report which platform it is running on directly, but you can guess it from the node name prefix, for example: aks-nodepool1-18370774-0 or gke-...
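A somewhat sturdier alternative (assuming the cloud provider integration is enabled and populates it) is the node's spec.providerID field, which is prefixed with the provider name:
kubectl get nodes -o jsonpath='{.items[0].spec.providerID}'
# e.g. aws:///eu-west-1a/i-0123456789abcdef0 on AWS,
# azure:///subscriptions/... on AKS, gce://my-project/... on GKE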
Your 4th point about blocking access between namespaces is good, but it would be better to do it with an ingress policy like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
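Such a deny-all ingress policy is typically paired with a policy that re-allows traffic from within the same namespace, for example (a sketch following the same conventions):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: default
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}    # any pod in this namespace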
For more details, I recommend this blog post that explains the complexities of k8s network policies: https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d

As Mark pointed out, network policies may not be able to address all your use cases. You might want to check out the Open Policy Agent project, and specifically its Gatekeeper tool, which could cover at least some of your needs.
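To give a feel for what that looks like: Gatekeeper policies are written as a ConstraintTemplate containing Rego, which is then instantiated by a constraint object. A minimal sketch, adapted from the required-labels example in the Gatekeeper docs (names are illustrative, not tied to your policies):
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredlabels

      violation[{"msg": msg}] {
        # labels the constraint requires, minus labels the object has
        provided := {label | input.review.object.metadata.labels[label]}
        required := {label | label := input.parameters.labels[_]}
        missing := required - provided
        count(missing) > 0
        msg := sprintf("you must provide labels: %v", [missing])
      }
Note that Gatekeeper validates objects at admission time, so it complements rather than replaces network policies.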

Related

Accessing minikube pods externally, apart from the local machine

I started minikube with the docker driver, but I can access the data on my local machine only. I want to provide that URL to a client.
Can anyone help me with this issue? Is it possible to access a minikube service externally, on machines other than the local one?
My service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: xxxx
spec:
  selector:
    app: xxxx
  ports:
  - port: 8080
    targetPort: xxxx
  type: LoadBalancer
Thank you
Important: minikube is not meant to be used in production. It's mainly an educational tool, used to teach users how Kubernetes works in a safe, controlled (and usually local) environment. Please do not use it in production environments.
Important #2: Do not, under any circumstances, give anyone access to your local machine - be it your client or your friend - unless it's a server meant to be accessible from outside the organization, and correctly hardened. This is a huge security risk.
Now, off to the question:
Running:
minikube service --url <service name>
will give you a URL with an external IP, probably something in the 192.168.0.0/16 range (if you are on a local network). Then you need to create a port forwarding rule on your router.
You can find more details here.
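If configuring the router isn't an option, kubectl itself can forward a port on all interfaces of your machine (the same security caveats apply; xxxx is the service name from your manifest):
kubectl port-forward --address 0.0.0.0 service/xxxx 8080:8080
# the service is now reachable on <your-machine-ip>:8080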

AKS LoadBalancer only sends traffic to one pod

I have a service configured as a LoadBalancer on a five-node cluster running Kubernetes 1.9.11. The LoadBalancer sits in front of three pods running an ASP.NET Core web application (that in turn talks to a NATS message queue, from which a listener retrieves messages and saves them to an Azure SQL database). All pods have resource requests and limits set, and everything is in a dedicated namespace.
I’m using a PowerShell script to cause the web application to generate a message for the NATS queue every 50 milliseconds. I can see in a couple of ways that the LoadBalancer is only sending traffic to one pod: firstly, the CPU graphs in the k8s dashboard show no activity for two of the pods, and secondly, I’m tracing Environment.MachineName from the web app right the way through to a field in the database and I can see that it’s only ever one MachineName. If I delete the pod that is receiving traffic, a new pod immediately starts receiving traffic, but it's still only that one pod out of three.
My understanding is that this isn’t how the LoadBalancer is intended to work, i.e. the LoadBalancer should send traffic to all pods. Is that right, and if so, any clues as to what I’m doing wrong? My service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: megastore-web-service
spec:
  selector:
    app: megastore-web
  ports:
  - port: 80
  type: LoadBalancer
It sounds to me like your load balancer is working correctly. When traffic comes into an LB, the LB will automatically direct it to the first available node. The fact that you can shut down your pod and traffic is rerouted is what would be expected.
This is a good article which helps explain how the LB works
https://blogs.msdn.microsoft.com/cie/2017/04/19/how-to-fix-load-balancer-not-working-in-round-robin-fashion-for-your-cloud-service/
To test this further, I would suggest you try opening a port on one of the pods but not the others, such as port 88 on pod 2. Then connect using loadbalancer:88 and see if the connection gets routed to the correct pod.
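It's also worth checking that the Service actually has all three pods registered as endpoints, which rules out a label/selector mismatch:
kubectl get endpoints megastore-web-service
# expect one <pod-ip>:<port> entry per matching pod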

Kubernetes AKS bind domain

A question regarding AKS: each time I release via CD, Kubernetes gives a random IP address to my services.
I would like to know how to bind a domain to that IP.
Can someone give me a link or an article to read?
You have two options.
You can either deploy a Service with type=LoadBalancer, which will provision a cloud load balancer. You can then point your DNS entry at that provisioned load balancer with (for example) a CNAME.
More information on this can be found here
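If the underlying problem is that the IP changes on every release, you can also create a static public IP in Azure up front and pin the Service to it, so your DNS record never has to change. A sketch (the IP and names are placeholders for your own):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 52.161.0.10    # static public IP created beforehand in Azure
  selector:
    app: my-app
  ports:
  - port: 80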
Your second option is to use an Ingress Controller with an Ingress resource. This offers much finer-grained access via URL parameters. You'll probably need to deploy your ingress controller pod/service with type=LoadBalancer though, to make it externally accessible.
Here's an article which explains how to do ingress on Azure with the nginx-ingress-controller
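As a sketch of what the Ingress resource itself looks like (hostname and service names are placeholders; this uses the older extensions/v1beta1 API current for this Kubernetes generation):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com    # the domain you bind in DNS
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80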

Random characters when describing kubernetes namespaces

I'm trying to connect my Kubernetes deployments together via DNS.
I have a Java (Spring Boot) deployment and a javascript (node.js) deployment, both exposed via a default ClusterIP Service. I need websocket and REST communication between both services.
I've read that I should use DNS so that these two services can talk to each other, but I'm having trouble determining what those DNS names are.
For example,
kubectl get pods --all-namespaces
gives me this:
NAMESPACE   NAME
default     javascript-deployment-65869b7db4-mxfrb
default     java-deployment-54bfc87fd6-z8wml
What do I need to specify in my Service config to stop these random suffixes being applied?
How do I then determine what my DNS names need to be with a similar form of my-svc.my-namespace.svc.cluster.local?
About your questions:
1- Kubernetes doesn't recommend avoiding these generated names: basically, the suffix ensures that pod names are unique, and the first part of the hash groups all the pods belonging to the same ReplicaSet.
So just as advice, don't touch it. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label
2- kubectl get services -o wide will show you which port your app is listening on. You just need to use the cluster IP plus that port, in the form CLUSTER_IP:PORT, to reach your service.
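For the DNS form you asked about: a ClusterIP Service is resolvable in-cluster (assuming kube-dns/CoreDNS is running) as <service>.<namespace>.svc.cluster.local. For example, if your Java app were exposed by a Service named java-service in the default namespace (the name and port here are hypothetical):
# from inside the javascript pod
curl http://java-service.default.svc.cluster.local:8080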
I fixed it using the Service metadata name and port.
For example, this is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-big-deployment
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: my-service
From my applications in the cluster I can now access this service via the following environment variables:
MY_BIG_DEPLOYMENT_SERVICE_HOST
MY_BIG_DEPLOYMENT_SERVICE_PORT

How to get ssl on a kubernetes application?

I have a simple Meteor app deployed on Kubernetes. I associated an external IP address with the server, so that it's accessible from within the cluster. Now I want to expose it to the internet and secure it (using the HTTPS protocol). Can anyone give simple instructions for this?
In my opinion kube-lego is the best solution for GKE. Here's why:
Uses Let's Encrypt as a CA
Fully automated enrollment and renewals
Minimal configuration in a single ConfigMap object
Works with nginx-ingress-controller (see example)
Works with GKE's HTTP Load Balancer (see example)
Multiple domains fully supported, including virtual hosting multiple https sites on one IP (with nginx-ingress-controller's SNI support)
Example configuration (that's it!):
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  lego.email: "your@email"
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"
Example Ingress (you can create more of these):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site1
  annotations:
    # remove next line if not using nginx-ingress-controller
    kubernetes.io/ingress.class: "nginx"
    # next line enables kube-lego for this Ingress
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - site1.com
    - www.site1.com
    - site2.com
    - www.site2.com
    secretName: site12-tls
  rules:
  ...
There are several ways to set up an SSL endpoint, but your solution needs to solve two issues: first, you need to get a valid cert and key; second, you need to set up an SSL endpoint in your infrastructure.
Have a look at the k8s ingress controller. You can provide an ingress controller with a certificate/key secret from the k8s secret store to set up an SSL endpoint. Of course, this requires you to already have a valid certificate and key.
You could also have a look at k8s-specific solutions for issuing and using certificates, like the Kubernetes Letsencrypt Controller, but I have never used them and cannot say how well they work.
Here are some general ideas for issuing and using SSL certificates:
1. Getting a valid SSL certificate and key
AWS
If you are running on AWS, the easiest way I can think of is to set up an ELB, which can issue the SSL cert automatically for you.
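A sketch of what that looks like on the Service side, using the in-tree AWS load balancer annotations (the ACM certificate ARN and names are placeholders you'd replace with your own):
apiVersion: v1
kind: Service
metadata:
  name: meteor-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/example
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: meteor-app
  ports:
  - port: 443
    targetPort: 3000    # Meteor's default port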
LetsEncrypt
You could also have a look at LetsEncrypt to issue free certificates for your domain. A nice thing about it is that you can automate your cert-issuing process.
CA
Of course, you could always go the old-fashioned way and have a certificate issued by a provider that you trust.
2. Setting up the ssl endpoint
AWS
Again, if you have an ELB then it already acts as the endpoint and you are done. Of course your client <-> ELB connection is encrypted, but the ELB <-> k8s-cluster leg is unencrypted.
k8s ingress controller
As mentioned above, depending on the k8s version you use, you could also set up a TLS ingress controller.
k8s proxy service
Another option is to set up a service inside your k8s cluster which terminates the SSL connection and proxies the traffic to your Meteor application unencrypted.
You could use nginx as a proxy for this. In this case I suggest you store your certificate's key in the k8s secret store and mount it into the nginx container. NEVER ship a container with secrets such as certificate keys stored inside! Of course, you still somehow need to get your encrypted traffic to a k8s node - again, there are several ways to achieve this... The easiest would be to modify your DNS entry to point to the k8s nodes, but ideally you would use a TCP LB.
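To make the secret-mounting part concrete, here is a sketch of the relevant pod spec fragment for the nginx proxy (all names are illustrative):
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: tls
      mountPath: /etc/nginx/tls    # nginx config points ssl_certificate here
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: my-tls-secret    # holds tls.crt and tls.key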
