Blacklist an IP in a Kubernetes security policy

I read through the Kubernetes network policy documentation and stumbled upon this statement:
What you can't do with network policies (at least, not yet)
The ability to explicitly deny policies (currently the model for NetworkPolicies are deny by default, with only the ability to add allow rules).
Is there a way around this limitation, or is there any add-on to Kubernetes that allows for blacklisting IPs?

You can use third-party tools for this task.
A few examples:
https://docs.aws.amazon.com/eks/latest/userguide/restrict-service-external-ip.html
https://istio.io/v1.1/docs/tasks/policy-enforcement/denial-and-list/#ip-based-whitelists-or-blacklists
apiVersion: config.istio.io/v1alpha2
kind: listchecker
metadata:
  name: whitelistip
spec:
  # providerUrl: ordinarily black and white lists are maintained
  # externally and fetched asynchronously using the providerUrl.
  overrides: ["10.57.0.0/16"]  # overrides provide a static list
  blacklist: false
  entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: listentry
metadata:
  name: sourceip
spec:
  value: source.ip | ip("0.0.0.0")
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
spec:
  match: source.labels["istio"] == "ingressgateway"
  actions:
  - handler: whitelistip.listchecker
    instances:
    - sourceip.listentry
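Note that the listchecker above acts as an allow-list because blacklist is false; setting blacklist: true inverts the check so that the overrides entries are denied instead, which gives the IP-blacklisting behaviour asked about.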
With the NGINX ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    #
    # This is the relevant part
    #
    nginx.ingress.kubernetes.io/whitelist-source-range: 49.36.X.X/32
    # depending on the ingress controller version the annotation
    # above may need to be modified to remove the prefix nginx. i.e.
    # ingress.kubernetes.io/whitelist-source-range: 49.36.X.X/32
spec:
  rules:
  - host: web.manitestdomain.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
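Alternatively, if your cluster runs the Calico CNI, its GlobalNetworkPolicy supports an explicit Deny action that standard NetworkPolicy lacks. A minimal sketch (the CIDR is a placeholder; apply it with calicoctl apply -f):
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: blacklist-cidr
spec:
  order: 10          # low order = evaluated before higher-order policies
  selector: all()    # apply to all workloads
  types:
  - Ingress
  ingress:
  - action: Deny     # explicitly deny traffic from the blacklisted range
    source:
      nets:
      - 192.0.2.0/24 # placeholder CIDR to blacklist
  - action: Pass     # defer everything else to normal NetworkPolicies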

Related

AKS with LetsEncrypt and multiple certs for different containers

I'm looking for any working samples of applying different certificates on AKS with Application Gateway as the Ingress Controller.
I have a Key Vault certificate that is imported into App Gateway/Ingress as sitecomcert, and here is the Ingress manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site-agic-ig
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: sitecomcert
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    appgw.ingress.kubernetes.io/request-timeout: "180"
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
spec:
  rules:
  - host: "site.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site-svc
            port:
              number: 80
...
Everything works perfectly here.
Now I have a second certificate in Key Vault for site2.com. This cert is already imported into App Gateway as site2comcert, and I have a container that should serve requests coming to site2.com, which points to the App Gateway public IP.
So I'm about to add:
- host: "site2.com" <--- How can I attach **site2comcert** cert?
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: site2-svc
port:
number: 80
but with this setup I receive an Untrusted Connection warning in the browser because sitecomcert is used. How can I configure App Gateway / Ingress so that site2comcert is used for the site2.com host specified above?
You can have multiple ingress resource definitions (snipped for brevity):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site-agic-ig
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: sitecomcert
spec:
  rules:
  - host: "site.com"
and
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site-agic-ig-site2
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: site2comcert
spec:
  rules:
  - host: "site2.com"

Ingress Nginx external IP set not working

I'm trying to make my Ingress use an external IP I have created in Azure.
First I created an IP in the portal and granted my AKS cluster the Network Contributor role on it, then added it in the values file used by Helm:
# -- List of IP addresses at which the controller services are available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: ["20.124.63.xxx"]
# -- Used by cloud providers to connect the resulting `LoadBalancer` to a pre-existing static IP according to https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
loadBalancerIP: ""
loadBalancerSourceRanges: []
enableHttp: true
enableHttps: true
But after deployment, my ingress gets two external IPs, and the one I set does not work at all; only the automatically generated one works.
My config looks like this, so I think running this as a LoadBalancer is not exactly possible:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-one
            port:
              number: 80
  - host: xxx.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-two
            port:
              number: 80
I would like to use the static IP I created to access my Ingress. What should I do to achieve that?
Exposing the Service of your ingress controller with your public IP can be done like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
Azure will now spin up a load balancer with your public IP.
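If you prefer to keep this in the Helm values you already use, the equivalent settings live under controller.service in the ingress-nginx chart; a minimal sketch (note that loadBalancerIP, not externalIPs, is the right knob for an Azure static IP):
controller:
  service:
    annotations:
      # only needed if the static IP lives in a different resource group than the cluster
      service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
    loadBalancerIP: "20.124.63.xxx"
    externalIPs: []  # leave empty; externalIPs bypasses the Azure load balancer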
The Ingress Controller will then route incoming traffic to your apps according to an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
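To verify, check the controller Service (adjust the namespace to wherever the chart was installed):
kubectl get service ingress-nginx-controller -n ingress-nginx
The EXTERNAL-IP column should now show your static IP.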

Azure Kubernetes Ingress: 502 Bad Gateway while using Path inside ingress configuration

I am facing a 502 Bad Gateway issue in my Application Gateway.
I am using Azure Kubernetes Service to deploy my cluster, which is connected to the Application Gateway Ingress Controller.
Configuration files:
kube-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myApp
  namespace: en02
  labels:
    app: myApp
spec:
  selector:
    matchLabels:
      app: myApp
  replicas: 1
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
      - name: myApp
        image: somecr.azurecr.io/myApp:1.0.0.30
        resources:
          limits:
            memory: "64Mi"
            cpu: "100m"
        ports:
        - containerPort: 5100
        env:
        - name: ASPNETCORE_HOSTINGSTARTUPASSEMBLIES
          value: "Microsoft.AspNetCore.ApplicationInsights.HostingStartup"
        - name: "ApplicationInsights__ConnectionString"
          value: "myKey"
---
apiVersion: v1
kind: Service
metadata:
  namespace: en02
  name: myApp
spec:
  selector:
    app: myApp
  ports:
  - port: 30153
    targetPort: 5100
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: en02
  name: etopia
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: "/api/home"
spec:
  rules:
  - http:
      paths:
      - path: /myApp/
        backend:
          service:
            name: myApp
            port:
              number: 30153
        pathType: Exact
Result of
kubectl describe ingress -n en02
Name:             ingress
Labels:           <none>
Namespace:        en02
Address:          public-ip
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path     Backends
  ----        ----     --------
  *
              /myApp/  myApp:30153 (10.0.0.106:5100)
Annotations:  appgw.ingress.kubernetes.io/health-probe-path: /api/home
              kubernetes.io/ingress.class: azure/application-gateway
Events:       <none>
I am getting the expected results from 10.0.0.106:5100/api/home, and the Application Gateway health status is 200.
No matter what I do, I always get the Bad Gateway error. I was able to access a sample app on port 80 (where the ingress path was /), but if I specify anything in the ingress path (e.g. /cashify/), it always gives me Bad Gateway.
I tried adding a readinessProbe to the container but it doesn't work (however, I am already getting 200 under the Application Gateway health status).
Please help.
Please check whether the workaround below helps.
Try updating the Ingress YAML to use a wildcard path specification so that APIs with different paths can be reached.
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
....
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /api/*  # wildcard path
        backend:
          serviceName: apiservice
          servicePort: 80
      - backend:
          .....
          servicePort: 80
Note from the MS docs: If you want Application Gateway to probe on a different protocol, host name, or path and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
As you said you have defined a readinessProbe, please check that the path of those probes is correct.
Check the same for both the livenessProbe and the readinessProbe.
Also note that readinessProbe and livenessProbe are supported when configured with httpGet.
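For reference, a minimal httpGet readinessProbe for the container above, reusing the /api/home path and port 5100 from your manifests (the timing values are placeholders to tune):
readinessProbe:
  httpGet:
    path: /api/home  # same path the App Gateway health probe uses
    port: 5100
  initialDelaySeconds: 5
  periodSeconds: 10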
References:
bad request - path based routing · kubernetes-ingress · GitHub
application-gateway-troubleshooting-502

Azure Application Gateway Ingress Controller not reaching Service (ClusterIP)

I have explained the scenario here. I can reach a ClusterIP Service using the NGINX ingress, but I can't reach the same Service using the Azure Application Gateway ingress. The annotation below is not helping me:
appgw.ingress.kubernetes.io/rewrite-target: /
Any idea?
Make sure you add the annotations below to example-ingress:
appgw.ingress.kubernetes.io/use-private-ip: "false"
kubernetes.io/ingress.class: azure/application-gateway
You can see the full list and examples here.
You were using the wrong annotation. I have updated your ingress with the correct one:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
  - http:
      paths:
      - path: /apple/*
        pathType: Prefix
        backend:
          service:
            name: apple-service
            port:
              number: 5678
Check out all the AGIC annotations here.

Install two Traefik ingress controllers on the same Kubernetes cluster

I have a situation where I am planning to use two separate Traefik ingress controllers inside the Kubernetes cluster.
I have a few URLs which I want to be accessible through VPN only, and a few which can be publicly accessible.
In the current architecture, I have one Traefik ingress controller and two separate ALBs, one internal and one internet-facing, both pointing to Traefik.
Let's say I have the URLs public.example.com and private.example.com: public.example.com points to the internet-facing ALB, and private.example.com points to the internal ALB. But if someone discovers what public.example.com resolves to and points private.example.com at the same address in their /etc/hosts, they will be able to access my private website.
To avoid this, I am planning to run two separate Traefik ingress controllers, one serving only private URLs and one serving only public URLs. Can this be done? Or is there any other way to avoid this?
To deploy two separate Traefik ingress controllers, serving private and public traffic separately, I used the kubernetes.ingressclass argument.
This is what the documentation says for kubernetes.ingressclass:
--kubernetes.ingressclass Value of kubernetes.io/ingress.class annotation to watch for
I created two Deployments with different values for kubernetes.ingressclass:
one with kubernetes.ingressclass=traefik, which was behind a public ALB, and one with kubernetes.ingressclass=traefik-internal, which was behind a private/internal ALB.
For services which I want to serve privately, I use the following annotation in the Ingress objects:
annotations:
  kubernetes.io/ingress.class: traefik-internal
and for public ones:
annotations:
  kubernetes.io/ingress.class: traefik
My deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-internal-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-internal-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-internal-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-internal-ingress-lb
    spec:
      serviceAccountName: traefik-internal-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.7
        name: traefik-internal-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --accesslog=true
        - --kubernetes.ingressclass=traefik-internal # watch only Ingress objects with the annotation "kubernetes.io/ingress.class: traefik-internal"
Hope this helps someone.
You can achieve this with a single ingress controller inside the cluster by creating multiple Ingress objects.
For the private site:
consider the whitelist-source-range annotation in the ingress resource.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/24,172.10.0.1
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          serviceName: test
          servicePort: 80
For the public site:
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          serviceName: test
          servicePort: 80
Multiple Træfik Deployments can run concurrently in the same cluster. For instance, it is conceivable to have one Deployment deal with internal and another one with external traffic.
For such cases, it is advisable to classify Ingress objects through a label and configure the labelSelector option per each Træfik Deployment accordingly. To stick with the internal/external example above, all Ingress objects meant for internal traffic could receive a traffic-type: internal label, while objects designated for external traffic receive a traffic-type: external label. The label selectors on the Træfik Deployments would then be traffic-type=internal and traffic-type=external, respectively.
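A sketch of that label-based split for a Traefik v1 Deployment (assuming the v1.7 flag spelling --kubernetes.labelselector; adapt the label names to taste):
# On the internal Traefik Deployment, watch only labelled Ingress objects:
args:
- --kubernetes
- --kubernetes.labelselector=traffic-type=internal

# And on each Ingress object meant for internal traffic:
metadata:
  labels:
    traffic-type: internal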
