Network-policy file to allow traffic from only certain IP addresses - firewall

How do I write NetworkPolicy files that allow traffic to reach the application from only a few IP addresses (e.g. 127.18.12.1, 127.19.12.3)? I have gone through https://github.com/ahmetb/kubernetes-network-policy-recipes but didn't find a satisfactory answer, and I have also looked at the official Kubernetes documentation for network policies. It would be great if anyone could help me write the network-policy file.
My sample code:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: example
spec:
  podSelector:
    matchLabels:
      app: couchdb
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 127.18.12.1/16
      ports:
        - protocol: TCP
          port: 8080

If you do not want to edit the NetworkPolicy and instead want to restrict traffic at the ingress for a particular source, you can edit the Ingress with annotations like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: restricted-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "True"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "True"
    nginx.ingress.kubernetes.io/whitelist-source-range: "00.0.0.0, 142.12.85.524"
For a network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
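Applied to the setup in the question (pods labelled app: couchdb in the example namespace, traffic allowed only from 127.18.12.1 and 127.19.12.3), a minimal sketch could look like the one below. The /32 masks match exactly one address each; the port 8080 is carried over from the question's sample and may need adjusting. Note that ipBlock rules are only enforced if the cluster's network plugin supports NetworkPolicy, and the source IP must not be rewritten by NAT before it reaches the pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-selected-ips
  namespace: example
spec:
  podSelector:
    matchLabels:
      app: couchdb               # the pods to protect
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 127.18.12.1/32   # /32 = exactly this address
        - ipBlock:
            cidr: 127.19.12.3/32
      ports:
        - protocol: TCP
          port: 8080               # assumed application port from the question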

Related

Azure Kubernetes Ingress : 502 Bad Gateway while using Path inside ingress configuration

I am facing the 502 Bad gateway issue in my Application Gateway.
I am using Azure Kubernetes Service to deploy my cluster which is connected to Ingress Application Gateway.
Configuration Files:
kube-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myApp
  namespace: en02
  labels:
    app: myApp
spec:
  selector:
    matchLabels:
      app: myApp
  replicas: 1
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
        - name: myApp
          image: somecr.azurecr.io/myApp:1.0.0.30
          resources:
            limits:
              memory: "64Mi"
              cpu: "100m"
          ports:
            - containerPort: 5100
          env:
            - name: ASPNETCORE_HOSTINGSTARTUPASSEMBLIES
              value: "Microsoft.AspNetCore.ApplicationInsights.HostingStartup"
            - name: "ApplicationInsights__ConnectionString"
              value: "myKey"
---
apiVersion: v1
kind: Service
metadata:
  namespace: en02
  name: myApp
spec:
  selector:
    app: myApp
  ports:
    - port: 30153
      targetPort: 5100
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: en02
  name: etopia
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: "/api/home"
spec:
  rules:
    - http:
        paths:
          - path: /myApp/
            backend:
              service:
                name: myApp
                port:
                  number: 30153
            pathType: Exact
Result of
kubectl describe ingress -n en02
Name:             ingress
Labels:           <none>
Namespace:        en02
Address:          public-ip
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path       Backends
  ----        ----       --------
  *           /myApp/    myApp:30153 (10.0.0.106:5100)
Annotations:      appgw.ingress.kubernetes.io/health-probe-path: /api/home
                  kubernetes.io/ingress.class: azure/application-gateway
Events:           <none>
I am getting the expected response from 10.0.0.106:5100/api/home and the Application Gateway health status is 200.
No matter what I do, I always get a Bad Gateway error. I was able to access a sample app on port 80 (where the ingress path was /), but if I specify anything else in the ingress path (e.g. /cashify/) it always gives me Bad Gateway.
I tried adding a readinessProbe to the container but it doesn't help (even though I am already getting 200 under the Application Gateway health status).
Please help.
Please check whether the following works around the issue.
Try updating the Ingress YAML to use a wildcard path specification so that APIs under different paths can be reached.
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
....
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - host: xxx
      http:
        paths:
          - path: /api/*   # wildcard path
            backend:
              serviceName: apiservice
              servicePort: 80
          - backend:
              .....
              servicePort: 80
Note from the MS docs: If you want Application Gateway to probe on a different protocol, host name, or path and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
As you said you have defined a readinessProbe, please check that the path of those probes is correct.
Check the same for both the livenessProbe and the readinessProbe.
Also note that readinessProbe and livenessProbe are supported when configured with httpGet.
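For reference, here is a minimal sketch of httpGet probes on the container, assuming the /api/home endpoint and container port 5100 from the question; the timing values are illustrative:
# added under the container in the Deployment spec
readinessProbe:
  httpGet:
    path: /api/home        # assumed health endpoint from the question
    port: 5100
  initialDelaySeconds: 10
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /api/home
    port: 5100
  initialDelaySeconds: 15
  periodSeconds: 30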
References:
bad request - path based routing · kubernetes-ingress · GitHub
application-gateway-troubleshooting-502

Slow network on AKS with Egress policies

I've set up a Kubernetes cluster with AKS, Calico network policy and the Azure CNI plugin.
Some of my pods need to connect to external services and I want to set up Egress rules to limit traffic from the pods. Ingress is doing just fine, but for whatever reason when I add my Egress rules they work, yet everything becomes painfully slow.
For example: without Egress rules, the log line telling me my pod has connected to the DB appears instantly. With the Egress rules in place, it can take 3 to 10 minutes to connect to the DB.
Before connecting to the DB, my pod fetches values from a Key Vault. (I don't use Vault mounting because the variables fetched from the vault are dynamic, depending on a certain configuration.) So I'm not sure what takes time here.
I've run tcpdump and ifconfig but I can't see any dropped packets.
I haven't set ports on the CIDR blocks yet because I wanted to make sure it was not a matter of port mapping first.
The pod runs on port 3000 and here are my configs for the network policies:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: client2
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: client2-network-policy
  namespace: client2
spec:
  podSelector:
    matchLabels:
      io.kompose.service: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-basic
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
              app.kubernetes.io/component: controller
              app.kubernetes.io/instance: nginx-ingress
      ports:
        - protocol: TCP
          port: 3000
    - from:
        - namespaceSelector:
            matchLabels:
              networking/namespace: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    - from:  # KeyVault IPs fetched from DNS lookup
        - ipBlock:
            cidr: 1.2.3.4/32
    - from:
        - ipBlock:
            cidr: 3.4.5.6/32
    - from:
        - ipBlock:
            cidr: 34.5.4.6/32
    - from:
        - ipBlock:
            cidr: 2.5.6.2/32  # db shard1
    - from:
        - ipBlock:
            cidr: 2.4.5.2/32  # db shard2
    - from:
        - ipBlock:
            cidr: 1.2.4.3/32  # db shard3
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: ingress-basic
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
              app.kubernetes.io/component: controller
              app.kubernetes.io/instance: nginx-ingress
    - to:
        - namespaceSelector:
            matchLabels:
              networking/namespace: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    - to:
        - ipBlock:
            cidr: 1.3.8.0/24  # unknown Azure service seen in tcpdump
    - to:  # KeyVault IPs fetched from DNS lookup
        - ipBlock:
            cidr: 1.2.3.4/32
    - to:
        - ipBlock:
            cidr: 3.4.5.6/32
    - to:
        - ipBlock:
            cidr: 34.5.4.6/32
    - to:
        - ipBlock:
            cidr: 2.5.6.2/32  # db shard1
    - to:
        - ipBlock:
            cidr: 2.4.5.2/32  # db shard2
    - to:
        - ipBlock:
            cidr: 1.2.4.3/32  # db shard3
I don't know where to look to debug the slowness. Any ideas or tips? Thanks!

Unable to access Ingress service using hostname

I created an Ingress service as below and I am able to get a response using the IP (retrieved using the kubectl get ingress command).
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: msanajyv/k8s_api
          ports:
            - containerPort: 80
Service File
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-clusterip-service
  labels:
    app: app1-nginx
spec:
  type: ClusterIP
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80
Ingress File
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxapp1-ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: app1-nginx-clusterip-service
              servicePort: 80
With the above YAML files I am able to access the service using the IP address, i.e. http://xx.xx.xx.xx/weatherforecast.
But I would like to access the service using a domain name instead of the IP. So I created a DNS zone in my Azure portal and added a record set as below.
I also changed my Ingress file as below.
...
rules:
  - host: app1.msvcloud.io
    http:
      paths:
        - path: /
          backend:
            serviceName: app1-nginx-clusterip-service
            servicePort: 80
When I access using the host name (http://app1.msvcloud.io/weatherforecast), the host is not getting resolved. Kindly let me know what I am missing.
By creating a record in your private DNS zone, you can only resolve the name (app1.msvcloud.io) within your virtual network. This means it will work if you remote into a VM within the VNet and access it from there, but it will not work if you try the same from outside the VNet. If you want the name to be resolvable on the public Internet, you need to buy the domain name and register it in Azure DNS.
The records contained in a private DNS zone aren't resolvable from the Internet. DNS resolution against a private DNS zone works only from virtual networks that are linked to it.
~ What is a private Azure DNS zone.

How to configure Azure Application Gateway Ingress Controller (AGIC) yaml

I need help with the AGIC configuration. I am using a LoadBalancer service for my existing AKS cluster, and below is the sample YAML file that works; I can access the application using the LB public IP:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
  namespace: asp-test
  labels:
    app: asp-frontend
spec:
  selector:
    matchLabels:
      app: asp-frontend
  template:
    metadata:
      labels:
        app: asp-frontend
    spec:
      containers:
        - name: aspnetapp
          image: "mcr.microsoft.com/dotnet/core/samples:aspnetapp"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetapp-load
  namespace: asp-test
  labels:
    app: asp-frontend
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: mc_asp-onef-dev_rg_asp_aks_eastus2
spec:
  loadBalancerIP: 10.10.10.10
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: asp-frontend
==================
Now I would like to use AGIC instead of the LB, and I am just adding the below section to the file, but I get a "502 Bad Gateway" error. My AKS and AG VNets are peered, and I don't have an NSG blocking the connection. The deployment is successful and the pods are running. I can access the app using the LB IP but not through AGIC.
I have tried editing this file to use a normal AKS service instead of the LB, but I still get the same error.
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aspnetapp
  namespace: asp-test
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: aspnetapp-load
              servicePort: 80

Can you have both an HTTP port and a TCP port exposed in AKS using Application Gateway?

Setup:
Azure Kubernetes Service
Azure Application Gateway
We have a Kubernetes cluster in Azure which uses Application Gateway for managing network traffic. We are using App Gateway over a Load Balancer because we need to handle traffic at layer 7, hence path-based HTTP rules. We use the Kubernetes ingress controller for configuring App Gateway. See the config below.
Now I want a service that accepts requests on both HTTP (layer 7) and TCP (layer 4).
How do I do that? The exposed port should not be public on the Internet, but reachable on the Azure network. Do I need to add another ingress controller that is not configured to use App Gateway?
This is what I want to accomplish:
This is the config for the ingress controller using appgw:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service1
  labels:
    app: service1
  annotations:
    appgw.ingress.kubernetes.io/backend-path-prefix: /
    appgw.ingress.kubernetes.io/use-private-ip: "false"
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
    - hosts:
      secretName: <somesecret>
  rules:
    - host: <somehost>
      http:
        paths:
          - path: /service1/*
            backend:
              serviceName: service1
              servicePort: http
Current setup:
Generally you have two straightforward options.
The first is to use the direct pod IP or a headless ClusterIP Service.
I assume your AKS cluster employs either Azure CNI or Calico as its networking fabric; in both cases your pods get routable IPs in the AKS subnet.
https://learn.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking
Thus, you can make them accessible directly across your VNet.
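A minimal sketch of such a headless Service (the name, label and port are illustrative, not taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-backend          # hypothetical name
spec:
  clusterIP: None               # headless: DNS returns the pod IPs directly
  selector:
    app: my-tcp-backend         # hypothetical pod label
  ports:
    - name: tcp
      port: 8200
      targetPort: 8200
      protocol: TCP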
Alternatively, you could use a Service backed by an internal load balancer.
You can build an internal LB through the appropriate annotation:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
When you view the service details, the IP address of the internal load balancer is shown in the EXTERNAL-IP column. In this context, External is in relation to the external interface of the load balancer, not that it receives a public, external IP address.
https://learn.microsoft.com/en-us/azure/aks/internal-lb#create-an-internal-load-balancer
If needed, you can assign a predefined IP address to your LB and / or put it into a different subnet across your VNet or even into a private subnet, use VNet peering etc.
Eventually you can make it routable from wherever you need.
The exposed port should not be public, but public in the kubernetes cluster.
I assume you mean that your application should expose a port for clients within the Kubernetes cluster. You don't have to do anything special in Kubernetes for Pods to do this; they can accept TCP connections on any port. But you may want to create a Service of type: ClusterIP for this, so it will be easier for clients.
Nothing more than that should be needed.
Leveraging a Kubernetes Service should address your concern; all you need to do is modify your YAML file.
Here is an example snippet where I have created a service
apiVersion: v1
kind: Service
metadata:
  name: cafe-app-service
  labels:
    application: cafe-app-service
spec:
  ports:
    - port: 80
      protocol: TCP        # Service ports only accept TCP/UDP/SCTP, not "HTTP"
      targetPort: 8080
      name: coffee-port
    - port: 8081
      protocol: TCP
      targetPort: 8081
      name: tea-port
  selector:
    application: cafe-app
You can then reference this service in the ingress that you have created.
I went with having two Services for the same pod: one for HTTP, handled by an Ingress (appgw), and one for TCP, using an internal Azure Load Balancer.
This is my config that I ended up using:
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-serviceA-cm
  namespace: default
data:
  "8200": "default/serviceA:8200"   # ConfigMap data keys must be strings, so the port is quoted
TCP service
apiVersion: v1
kind: Service
metadata:
  name: tcp-serviceA
  namespace: default
  labels:
    app.kubernetes.io/name: serviceA
    app.kubernetes.io/part-of: serviceA
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: subnetA
spec:
  type: LoadBalancer
  ports:
    - name: tcp
      port: 8200
      targetPort: 8200
      protocol: TCP
  selector:
    app: serviceA
    release: serviceA
HTTP Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: serviceA
  labels:
    app: serviceA
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
    - hosts:
      secretName: <somesecret>
  rules:
    - host: <somehost>
      http:
        paths:
          - path: /serviceA/*
            backend:
              serviceName: serviceA
              servicePort: http
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceA
  labels:
    app: serviceA
    release: serviceA
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceA
      release: serviceA
  template:
    metadata:
      labels:
        app: serviceA
        release: serviceA
    spec:
      containers:
        - name: serviceA
          image: "serviceA:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
HTTP Service
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  labels:
    app: serviceA
    release: serviceA
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: serviceA
    release: serviceA
