I've set up a Kubernetes cluster with AKS, the Calico network policy engine and the Azure CNI plugin.
Some of my pods need to connect to external services, and I want to set up egress rules to limit traffic from the pods. Ingress is doing just fine, but for whatever reason, when I add my egress rules everything still works, yet it becomes painfully slow.
For example: without egress rules, the logs telling me my pod is connected to the DB appear instantly. With the egress rules in place, it can take 3 to 10 minutes to connect to my DB.
Before connecting to the DB, my pod fetches values from a Key Vault. (I don't use Vault mounting because the variables fetched from the vault are dynamic, depending on a certain configuration.) So I'm not sure what takes time here.
I've run tcpdump and ifconfig but I can't see any dropped packets.
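For reference, this is roughly what I ran on the node (the pod IP is a placeholder):
# Capture traffic to/from the pod while it tries to reach the Key Vault and the DB
sudo tcpdump -ni any host <pod-ip>
# Check the interface counters for drops/errors
ifconfig
ip -s link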
I haven't set ports on the CIDR blocks yet because I wanted to make sure it was not a matter of port mapping first.
The pod runs on port 3000, and here are my configs for the network policies:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: client2
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: client2-network-policy
  namespace: client2
spec:
  podSelector:
    matchLabels:
      io.kompose.service: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-basic
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
              app.kubernetes.io/component: controller
              app.kubernetes.io/instance: nginx-ingress
      ports:
        - protocol: TCP
          port: 3000
    - from:
        - namespaceSelector:
            matchLabels:
              networking/namespace: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    - from: # KeyVault IPs fetched from a DNS lookup
        - ipBlock:
            cidr: 1.2.3.4/32
    - from:
        - ipBlock:
            cidr: 3.4.5.6/32
    - from:
        - ipBlock:
            cidr: 34.5.4.6/32
    - from:
        - ipBlock:
            cidr: 2.5.6.2/32 # db shard1
    - from:
        - ipBlock:
            cidr: 2.4.5.2/32 # db shard2
    - from:
        - ipBlock:
            cidr: 1.2.4.3/32 # db shard3
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: ingress-basic
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
              app.kubernetes.io/component: controller
              app.kubernetes.io/instance: nginx-ingress
    - to:
        - namespaceSelector:
            matchLabels:
              networking/namespace: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    - to:
        - ipBlock:
            cidr: 1.3.8.0/24 # unknown Azure service I saw in tcpdump
    - to: # KeyVault IPs fetched from a DNS lookup
        - ipBlock:
            cidr: 1.2.3.4/32
    - to:
        - ipBlock:
            cidr: 3.4.5.6/32
    - to:
        - ipBlock:
            cidr: 34.5.4.6/32
    - to:
        - ipBlock:
            cidr: 2.5.6.2/32 # db shard1
    - to:
        - ipBlock:
            cidr: 2.4.5.2/32 # db shard2
    - to:
        - ipBlock:
            cidr: 1.2.4.3/32 # db shard3
I don't know where to look to debug the slowness. Any ideas or tips? Thanks!
Related
I created an Ingress service as below, and I am able to get a response using the IP (retrieved with the kubectl get ingress command).
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: msanajyv/k8s_api
          ports:
            - containerPort: 80
Service File
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-clusterip-service
  labels:
    app: app1-nginx
spec:
  type: ClusterIP
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80
Ingress File
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxapp1-ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: app1-nginx-clusterip-service
              servicePort: 80
With the above YAML files I am able to access the service using the IP address, i.e. http://xx.xx.xx.xx/weatherforecast.
But I want to access the service using a domain name instead of the IP, so I created a DNS zone in my Azure portal and added a record set as below.
I also changed my Ingress file as below.
...
rules:
  - host: app1.msvcloud.io
    http:
      paths:
        - path: /
          backend:
            serviceName: app1-nginx-clusterip-service
            servicePort: 80
When I access the service using the host name (http://app1.msvcloud.io/weatherforecast), the host is not getting resolved. Kindly let me know what I am missing.
By creating a record in your private DNS zone, you can only resolve the name (app1.msvcloud.io) within your virtual network. This means it will work if you remote into a VM within the VNet and access it from there, but it will not work if you try the same outside the VNet. If you want the name to be resolvable on the world wide web, you need to buy the domain name and register it in Azure DNS.
The records contained in a private DNS zone aren't resolvable from the
Internet. DNS resolution against a private DNS zone works only from
virtual networks that are linked to it.
~What is a private Azure DNS zone.
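For example, a quick way to see the difference (using the record name from this question) is to run the same lookup from a VM inside the linked VNet and from your own machine:
# From a VM inside a VNet linked to the private DNS zone: resolves to the record you created
nslookup app1.msvcloud.io
# From a machine outside the VNet (e.g. your laptop): NXDOMAIN, because the private zone is not visible
nslookup app1.msvcloud.io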
How do I write network policy files to allow traffic to the application from only a few IP addresses (e.g. 127.18.12.1, 127.19.12.3)? I have referred to https://github.com/ahmetb/kubernetes-network-policy-recipes but didn't find a satisfactory answer. It would be great if anyone could help me write the network-policy file. I have also referred to the official Kubernetes documentation on network policy.
My sample code
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: example
spec:
  podSelector:
    matchLabels:
      app: couchdb
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 127.18.12.1/16
      ports:
        - protocol: TCP
          port: 8080
If you don't want to edit the NetworkPolicy and instead want to restrict traffic at the ingress for a particular domain, you can edit the ingress with annotations like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: restricted-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "True"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "True"
    nginx.ingress.kubernetes.io/whitelist-source-range: "00.0.0.0, 142.12.85.524"
For a network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
          except:
            - 172.17.1.0/24
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
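If the intent from the question is to allow traffic only from the two listed addresses, a minimal sketch could use /32 blocks rather than a wide /16 (the label and port below are assumptions taken from the question's sample):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-two-ips
  namespace: example
spec:
  podSelector:
    matchLabels:
      app: couchdb          # assumed application label from the question
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 127.18.12.1/32
        - ipBlock:
            cidr: 127.19.12.3/32
      ports:
        - protocol: TCP
          port: 8080        # assumed application port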
I am using an AKS cluster on Azure. I am trying to discover the service using DNS (http://my-api.default.svc.cluster.local:3000/), but it's not working (This site can't be reached). With the service IP endpoint everything works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: test.azurecr.io/my-api:latest
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: testsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
kubectl describe services kube-dns --namespace kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.10.110.110
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Session Affinity: None
Events: <none>
kubectl describe svc my-api
Name: my-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-api","namespace":"default"},"spec":{"ports":[{"port":3000,"protocol":...
Selector: app=my-api
Type: ClusterIP
IP: 10.10.110.104
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.10.100.42:3000
Session Affinity: None
Events: <none>
From the second pod:
kubectl exec -it second-pod /bin/bash
curl my-api.default.svc.cluster.local:3000
Response: {"value":"Hello world2"}
A website is running in the second pod that uses the same endpoint, but it's not connecting to the service.
After fixing the indentation of your YAML file, I was able to launch the deployment and service successfully, and DNS resolution worked fine.
Differences:
- Fixed indentation
- Used the test1 namespace instead of default
- Used containerPort 80 instead of 3000
- Used my image
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: my-api
  name: my-api
  namespace: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - image: leodotcloud/swiss-army-knife
          name: my-api
          ports:
            - containerPort: 80
              protocol: TCP
Service:
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: test1
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 80
  selector:
    app: my-api
  type: ClusterIP
Debugging steps (a rough command sketch follows this list):
1. Install tcpdump inside both of the kube-dns containers and start capturing DNS traffic (filtered on the second pod's IP).
2. From inside the second pod, run a curl or dig command using the FQDN.
3. Check whether the DNS query packets are reaching the kube-dns containers. If not, check for networking issues.
4. If DNS resolution is working, start tcpdump inside your application container and check whether the curl packet is reaching the container.
5. Check the source and destination IP addresses of the packets.
6. Check the iptables rules on the hosts.
7. Check the sysctl settings.
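A possible set of commands for these steps (the pod IP is a placeholder; the FQDN and port follow the manifests above):
# Step 1: inside a kube-dns/CoreDNS container, capture DNS traffic from the second pod
tcpdump -ni any "port 53 and host <second-pod-ip>"
# Step 2: from inside the second pod, resolve and call the service by FQDN
dig my-api.test1.svc.cluster.local
curl -v http://my-api.test1.svc.cluster.local:3000/
# Step 4: inside the application container, check whether the request arrives
tcpdump -ni any port 80
# Steps 6-7: on the nodes, inspect iptables and bridge netfilter settings
iptables-save | grep -i my-api
sysctl net.bridge.bridge-nf-call-iptables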
If you use a Deployment to deploy your application onto a cluster where it will be consumed via a Service, you should have no need to manually set Endpoints. Just rely on Kubernetes and define a normal selector in your Service object.
Other than that, when it does make sense (an external service consumed from within the cluster), you need to make sure your Endpoints ports definition fully matches the one on the Service (including protocol and potentially name). Incomplete matching is the most common reason for endpoints not showing up as part of a service.
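As an illustration, a selector-less Service backed by a manually created Endpoints object might look like this (the name, IP and port are made-up examples; the point is that the port name and protocol match between the two objects):
apiVersion: v1
kind: Service
metadata:
  name: external-db          # example name
spec:
  ports:
    - name: db
      protocol: TCP
      port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db          # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.50        # example external IP
    ports:
      - name: db             # name and protocol must match the Service port
        protocol: TCP
        port: 3306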
From the above discussion, what I understood is that you want to expose a service, but not by using the IP address.
A service can be exposed in many ways; you should look at the Service type LoadBalancer.
Try modifying your service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
This will create a load balancer and map your service to it.
Later you can add this load balancer to the DNS mapping service provided by Azure to get the domain name you like, e.g. http://my-api.example.com:3000.
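As a quick check after applying the change, the external IP shows up on the service once Azure finishes provisioning (the filename below is just an example):
kubectl apply -f my-api-service.yaml
kubectl get service my-api --watch   # wait for EXTERNAL-IP to change from <pending> to an address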
I would also like to add that if you define your ports as follows:
ports:
  - name: http
    port: 80
    targetPort: 3000
This will redirect traffic coming in on port 80 to port 3000, and your service call will look much cleaner, e.g. http://my-api.example.com.
I have a problem: the pods in my Minikube cluster are not able to reach the service through its domain name.
To run Minikube I use the following commands (running on Windows 10):
minikube start --vm-driver hyperv;
minikube addons enable kube-dns;
minikube addons enable ingress;
This is my deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-world
    spec:
      containers:
        - image: karthequian/helloworld:latest
          imagePullPolicy: Always
          name: hello-world
          ports:
            - containerPort: 80
              protocol: TCP
          resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
This is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
  selfLink: /api/v1/namespaces/default/services/hello-world
spec:
  ports:
    - nodePort: 31595
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: hello-world
  sessionAffinity: None
  type: ExternalName
  externalName: minikube.local.com
status:
  loadBalancer: {}
This is my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
spec:
  rules:
    - host: minikube.local.com
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-world
              servicePort: 80
So, if I go inside the hello-world pod and run curl minikube.local.com or nslookup minikube.local.com from /bin/bash, the name doesn't resolve.
How can I make sure that the pods can resolve the DNS name of the service?
I know I can specify hostAliases in the deployment definition, but is there an automatic way that will update the DNS of Kubernetes?
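For reference, the in-cluster lookup I would expect to work (the service name and namespace are taken from the manifests above, and assuming the image ships nslookup) is:
kubectl exec -it <hello-world-pod> -- nslookup hello-world.default.svc.cluster.local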
So, you want to expose your app on Minikube? I've just tried it using the default ClusterIP service type (essentially, removing the ExternalName stuff you had) and with this YAML file I can see your service on https://192.168.99.100 where the Ingress controller lives:
The service now looks like so:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    run: hello-world
And the ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host:
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-world
              servicePort: 80
Note: Within the cluster your service is now available via hello-world.default (that's the DNS name assigned by Kubernetes within the cluster) and from the outside you'd need to map, say hello-world.local to 192.168.99.100 in your /etc/hosts file on your host machine.
Alternatively, if you change the Ingress resource to - host: hello-world.local then you can (from the host) reach your service using this FQDN like so: curl -H "Host: hello-world.local" 192.168.99.100.
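For completeness, the hosts-file mapping mentioned above would look something like this (using the Minikube IP from this example):
# /etc/hosts on the host machine
192.168.99.100  hello-world.local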
I am trying to set up a K8s deployment where ingress controllers can expose a service on a subdomain, i.e. app1 can configure itself to receive traffic from app1.sub.domain.io in its ingress config.
I have a DNS A record *.sub.domain.io that points to a load balancer. That load balancer points to the cluster's instance group.
So, if I am right, all traffic that goes to anything at sub.domain.io will land inside the cluster, and I just need to route said traffic.
Below are the k8s configs, which contain a deployment, a service and an ingress. The pods are healthy and working. I believe the service isn't strictly required, but I will want other pods to talk to it via internal DNS, so it's added.
The ingress rules have a host app1.sub.domain.io, so in theory, curl'ing app1.sub.domain.io should follow:
DNS -> Load Balancer -> Cluster -> Ingress Controller -> Pod
At the moment, when I try to hit app1.sub.domain.io it just hangs. I have tried not having a service, and making an ExternalName service, and it doesn't work.
I don't want to go down the route of using a LoadBalancer ingress, as that creates a new external IP that needs to be applied to DNS records manually, or with a nasty bash script that waits for the service's external IP and runs a GCP command, and we don't want to do this for each service.
Ref link: https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
Deployment
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: app1
  namespace: default
  labels:
    app: app1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - image: xxxx:latest
          name: app1
          ports:
            - containerPort: 80
          env:
            - name: NODE_ENV
              value: production
Service
---
kind: Service
apiVersion: v1
metadata:
  name: app1
  labels:
    app: app1
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: app1
  type: ClusterIP
Ingress
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  labels:
    app: app1
spec:
  rules:
    - host: app1.sub.domain.io
      http:
        paths:
          - backend:
              serviceName: app1
              servicePort: 80
Once everything is deployed, if you query:
kubectl get pods,services,ingresses -l app=app1
NAME READY STATUS RESTARTS AGE
po/app1-6d4b9d8c5-4gcz5 1/1 Running 0 20m
po/app1-6d4b9d8c5-m4kwq 1/1 Running 0 20m
po/app1-6d4b9d8c5-rpm9l 1/1 Running 0 20m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/app1 ClusterIP x.x.x.x <none> 80/TCP 20m
NAME HOSTS ADDRESS PORTS AGE
ing/app1-ingress app1.sub.domain.io 80 20m
----------------------------------- Update -----------------------------------
Currently I'm doing this, which isn't ideal: I have a global static IP that's assigned to a DNS record.
---
kind: Service
apiVersion: v1
metadata:
  name: app1
  labels:
    app: app1
spec:
  type: NodePort
  selector:
    app: app1
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app1-static-ip
  labels:
    app: app1-static-ip
spec:
  backend:
    serviceName: app1
    servicePort: 80
*.sub.domain.io should point to the IP of the Ingress.
You can use a static IP for the Ingress by following the instructions in the tutorial here: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_5_optional_configuring_a_static_ip_address
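Roughly, reserving the address referenced by the annotation in the update above and reading back its value looks like this:
# Reserve a global static IP (the name matches the ingress annotation above)
gcloud compute addresses create app1-static-ip --global
# Read back the address so the *.sub.domain.io record can point at it
gcloud compute addresses describe app1-static-ip --global --format='value(address)'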
Try adding a path to your Ingress:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  labels:
    app: app1
spec:
  rules:
    - host: app1.sub.domain.io
      http:
        paths:
          - backend:
              serviceName: app1
              servicePort: 80
            path: /
If that doesn't work, please post the output of describe service and describe ingress.
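That is (object names taken from the manifests above):
kubectl describe service app1
kubectl describe ingress app1-ingress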
Do you have an Ingress Controller?
Traffic should go LB -> Ingress Controller -> Ingress -> Service ClusterIP -> Pods.
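A quick way to check whether an ingress controller is actually running (the namespace and labels depend on how it was installed; these are common for ingress-nginx):
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
kubectl get svc --all-namespaces | grep -i ingress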