I installed the ingress controller on my AKS cluster using Helm. I also created an ingress rule for my service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-demo-ingress
  namespace: my-demo
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: mydemoingress.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: api-gateway
            port:
              number: 8080
When I deployed the above ingress rule, I noticed that my backends have no IP, as seen below (api-gateway:8080 ()):
**kubectl describe ing my-demo-ingress -n my-demo**
Name: my-demo-ingress
Labels: app.kubernetes.io/managed-by=Helm
Namespace: my-demo
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
mydemoingress.com
/(.*) **api-gateway:8080 (<none>)**
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-redirect: false
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 4m34s nginx-ingress-controller Scheduled for sync
Normal Sync 4m34s nginx-ingress-controller Scheduled for sync
No IP address gets assigned to the ingress controller.
However, when I try this same setup on my local k3s cluster, the IP is assigned correctly. What am I doing wrong?
Update: Helm install command for ingress controller:
NAMESPACE=ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace $NAMESPACE \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
• AFAIK, the syntax used for serviceName and servicePort should be as given in the sample YAML file below. Also, the path to the specified service name might be missing in the YAML file you shared, so when the service is provisioned in the AKS cluster the port mapping might not be correct, and hence the internal load balancer cannot reach the created service.
Sample YAML file: -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: aks-helloworld
          servicePort: 80
        path: /(.*)
      - backend:
          serviceName: ingress-demo
          servicePort: 80
        path: /hello-world-two(/|$)(.*)
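Note: the extensions/v1beta1 Ingress API was removed in Kubernetes 1.22, so on a current AKS cluster the first rule above would be written against networking.k8s.io/v1 instead, roughly as follows (a sketch reusing the same names):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx # replaces the kubernetes.io/ingress.class annotation
  rules:
  - http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld
            port:
              number: 80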
• Apart from the above-stated modifications, I would also suggest that you check whether you have assigned an IP address that is not already in use in your virtual network, and that you have deployed a load balancer using that IP address in the AKS cluster, as below:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
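For example, assuming the values above are saved in a file named internal-ingress.yaml (a placeholder name), the controller could be redeployed with:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  -f internal-ingress.yaml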
These modifications should help you resolve your issue with the backend IP address pools.
Also, refer to the link below for more information:
https://microsoft.github.io/AzureTipsAndTricks/blog/tip253.html
Related
I have created an AKS cluster with 2 services exposed using an Ingress controller.
Below is the YAML file for the ingress with TLS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xyz-office-ingress02
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - office01.xyz.com
    secretName: tls-office-secret
  rules:
  - host: office01.xyz.com
  - http:
      paths:
      - path: /(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: office-webapp
            port:
              number: 80
      - path: /api/
        pathType: Prefix
        backend:
          service:
            name: xyz-office-api
            port:
              number: 80
kubectl describe ing
Name: xyz-office-ingress02
Labels: <none>
Namespace: default
Address: <EXTERNAL Public IP>
Ingress Class: <none>
Default backend: <default>
TLS:
tls-office-secret terminates office01.xyz.com
Rules:
Host Path Backends
---- ---- --------
*
/(/|$)(.*) office-webapp:80 (10.244.1.18:80,10.244.2.16:80)
/api/ xyz-office-api:80 (10.244.0.14:8000,10.244.1.19:8000)
Annotations: cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/use-regex: true
Events: <none>
Using the IP I am able to access both services; however, when using the DNS name it does not work and gives a 404 error.
Cleaning up remarks from the comments: basically, the issue is with the ingress rules definition. We have the following:
rules:
- host: office01.xyz.com
- http:
    paths:
    ...
We know that connecting to the ingress directly, without DNS, does work, while querying it through DNS gives a 404.
The reason for this 404 is that, when entering with a DNS name, you match the first rule, in which you did not define any backend.
One way to fix this is to move the host into the same rule as your http paths, e.g.:
spec:
  tls:
  ...
  rules:
  - host: office01.xyz.com
    http: # no "-", not a new entry => http & host belong to a single rule
      paths:
      - path: /(/|$)(.*)
        ...
      - path: /api/
        ...
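Putting it together with the hosts and services from the question, the corrected spec would look roughly like this:
spec:
  tls:
  - hosts:
    - office01.xyz.com
    secretName: tls-office-secret
  rules:
  - host: office01.xyz.com
    http:
      paths:
      - path: /(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: office-webapp
            port:
              number: 80
      - path: /api/
        pathType: Prefix
        backend:
          service:
            name: xyz-office-api
            port:
              number: 80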
I tried to reproduce the same issue in my environment and got the results below.
I have created the DNS zone for the cluster.
Created the namespace
kubectl create namespace ingress-basic
I have added the Helm repo and used Helm to deploy the ingress controller:
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace <namespace-name> \
--set controller.replicaCount=2
When I check the logs I am able to see the public IP of the load balancer.
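A quicker way to confirm the public IP is to query the controller's Service directly (the Service name below assumes the chart's default naming for a release called ingress-nginx):
kubectl get service ingress-nginx-controller --namespace ingress-basic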
I have created some role assignments to connect the DNS zones.
Assigned the cluster node pool's managed identity DNS Zone Contributor rights on the domain zone:
az role assignment create --assignee $UserClientId --role 'DNS Zone Contributor' --scope $DNSID
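For reference, the two variables above could be resolved like this (a sketch; the resource group, cluster, and zone names are placeholders):
UserClientId=$(az aks show --resource-group <rg-name> --name <cluster-name> --query identityProfile.kubeletidentity.clientId -o tsv)
DNSID=$(az network dns zone show --resource-group <rg-name> --name <zone-name> --query id -o tsv)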
I have run the following Helm command to deploy external-dns for the DNS zone:
helm install external-dns bitnami/external-dns --namespace ingress-basic \
  --set provider=azure \
  --set txtOwnerId=<cluster-name> \
  --set policy=sync \
  --set azure.resourceGroup=<rg-name> \
  --set azure.tenantId=<tenant-id> \
  --set azure.subscriptionId=<sub-id> \
  --set azure.useManagedIdentityExtension=true \
  --set azure.userAssignedIdentityID=<UserClient-Id>
I have installed cert-manager using Helm:
helm install cert-manager jetstack/cert-manager \
--namespace ingress-basic \
--version vXXXX
I have created and deployed the application:
vi nginxfile.yaml
kubectl apply -f file.yaml
I have created the ingress route; it will direct traffic to the application.
After that, verify whether the certificate gets created, and wait a few minutes for the DNS zone to update.
I have created the cert-manager resources and deployed them to the cluster:
kubectl apply -f file.yaml --namespace ingress-basic
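For reference, a minimal ClusterIssuer for Let's Encrypt could look like the sketch below; the issuer name letsencrypt matches the annotation used in the question above, while the email is a placeholder:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: user@example.com # placeholder; use a real contact address
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx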
Please refer to this URL for more details.
I'm trying to make my Ingress use an external IP I have created in Azure.
First I created an IP in the portal and added my AKS service as Network Contributor on it, then added it in the values file used by Helm:
# -- List of IP addresses at which the controller services are available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: ["20.124.63.xxx"]
# -- Used by cloud providers to connect the resulting `LoadBalancer` to a pre-existing static IP according to https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
loadBalancerIP: ""
loadBalancerSourceRanges: []
enableHttp: true
enableHttps: true
But after deployment, my ingress controller gets two external IPs, and the one set by me does not work at all; only the automatically generated one works.
My config looks like this, so I think running this as a LoadBalancer is not exactly possible:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-one
            port:
              number: 80
  - host: xxx.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-two
            port:
              number: 80
I would like to use the static IP I have created to access my Ingress. What should I do to achieve that?
Exposing the Service of your ingress controller with your public IP can be done like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the public IP is in another resource group
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
Azure will then spin up a load balancer with your public IP.
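Since the controller was installed with the Helm chart, the same can also be expressed through chart values instead of editing the Service by hand, e.g. (a sketch; note it is loadBalancerIP, not externalIPs, that ties the Azure load balancer to a pre-existing static IP):
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace <namespace> \
  --set controller.service.loadBalancerIP=<YOUR_STATIC_IP>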
The Ingress Controller will then route incoming traffic to your apps with an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
I am facing a 502 Bad Gateway issue with my Application Gateway.
I am using Azure Kubernetes Service to deploy my cluster, which is connected to the Application Gateway Ingress Controller.
Configuration Files:
kube-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myApp
  namespace: en02
  labels:
    app: myApp
spec:
  selector:
    matchLabels:
      app: myApp
  replicas: 1
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
      - name: myApp
        image: somecr.azurecr.io/myApp:1.0.0.30
        resources:
          limits:
            memory: "64Mi"
            cpu: "100m"
        ports:
        - containerPort: 5100
        env:
        - name: ASPNETCORE_HOSTINGSTARTUPASSEMBLIES
          value: "Microsoft.AspNetCore.ApplicationInsights.HostingStartup"
        - name: "ApplicationInsights__ConnectionString"
          value: "myKey"
---
apiVersion: v1
kind: Service
metadata:
  namespace: en02
  name: myApp
spec:
  selector:
    app: myApp
  ports:
  - port: 30153
    targetPort: 5100
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: en02
  name: etopia
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: "/api/home"
spec:
  rules:
  - http:
      paths:
      - path: /myApp/
        backend:
          service:
            name: myApp
            port:
              number: 30153
        pathType: Exact
Result of
kubectl describe ingress -n en02
Name: ingress
Labels: <none>
Namespace: en02
Address: public-ip
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/myApp/ myApp:30153 (10.0.0.106:5100)
Annotations: appgw.ingress.kubernetes.io/health-probe-path: /api/home
kubernetes.io/ingress.class: azure/application-gateway
Events: <none>
I am getting the expected results from 10.0.0.106:5100/api/home, and the Application Gateway health status is 200.
No matter what I do, I always get the Bad Gateway error. I was able to access a sample app on port 80 (where the ingress path was /), but if I specify anything else in the ingress path (e.g. /cashify/) it always gives me a Bad Gateway.
I tried adding a readinessProbe to the container but it doesn't work (however, I am already getting 200 under the Application Gateway health status).
Please help.
Please check whether the workaround below helps.
Try updating the ingress YAML to use a wildcard path specification so APIs with different paths can be reached.
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
....
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /api/* # wildcard path
        backend:
          serviceName: apiservice
          servicePort: 80
      - backend:
          .....
          servicePort: 80
Note from the MS docs: if you want Application Gateway to probe on a different protocol, host name, or path, and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
As you said you have defined a readinessProbe, please check whether the path of those probes is correct.
Check the same for both: 1. livenessProbe 2. readinessProbe
Also please note that readinessProbe and livenessProbe are supported when configured with httpGet.
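For reference, a minimal httpGet readinessProbe for the container in the question might look like this (the path mirrors the health-probe annotation and the port the containerPort from the question; the timing values are placeholders):
readinessProbe:
  httpGet:
    path: /api/home
    port: 5100
  initialDelaySeconds: 10
  periodSeconds: 15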
References:
bad request - path based routing · kubernetes-ingress · GitHub
application-gateway-troubleshooting-502
Currently I have an ingress controller that responds to a set of ingress rules. I use this setup as a de facto API gateway that exposes my different services to the internet.
I have an Azure domain dev-APIGateway.northeurope.cloudapp.azure.com pointing to the ingress controller; I set this up via the public IP address in the Azure portal.
Now to my problem: I want an additional domain name (dev-frontend.northeurope.cloudapp.azure.com) that resolves to the same ingress controller and satisfies the frontend ingress definition seen below.
I know I can achieve this by creating an additional ingress controller with its own IP address and domain, but this seems redundant. See the ingress definitions:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress-someapi
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: dev-APIGateway.northeurope.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: someServiceA-service
          servicePort: 80
        path: /someServiceA/(.*)
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress-frontend
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: dev-frontend.northeurope.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: frontend-service
          servicePort: 80
        path: /
If I had my own domain this would be fine: I would simply add A records for the domains that I own, along with their CNAMEs, and point them at the ingress controller's IP address, and that would be it. But this is not the case, since I'm stuck using Azure domain names.
I want to make services accessible from outside the Kubernetes cluster using an ingress controller. Following recipe 5.5 from the Kubernetes Cookbook, I ran this manifest:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: nginx-public
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host:
    http:
      paths:
      - path: /web
        backend:
          serviceName: nginx
          servicePort: 80
The Ingress object is visible in the Kubernetes dashboard, but it does not have an assigned endpoint:
Output of kubectl get ing:
NAME HOSTS ADDRESS PORTS AGE
nginx-public * 80 54m
Update:
Running kubectl describe ingress nginx-public gives:
Name: nginx-public
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/web nginx:80 (<none>)
Annotations:
ingress.kubernetes.io/rewrite-target: /
Events: <none>
Actually this is an issue with the Kubernetes Dashboard; we have the same issue.
Even if it isn't displayed, that doesn't mean your ingress isn't working. First you should check the ingress with kubectl (kubectl describe ingress nginx-public) and verify that the output is similar to this:
Name: test-ingress
Namespace: test
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
test-ssl-secret terminates test.myorg.com
Rules:
Host Path Backends
---- ---- --------
test.myorg.com
/ test-service:80 (<none>)
Afterwards you should verify that your service is reachable via your specified host.
Update:
Depending on the service in front of your ingress controller, your service should be reachable via http://{serverip}:{nodeport-http-port}/web if the service is of type NodePort (you will get 2 external ports in the 30000-32767 range; one is the HTTP port, the other the HTTPS port), or via http://{address-from-external-loadbalancer}/web if the service is of type LoadBalancer.
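To look up those ports, inspect the ingress controller's Service, e.g. (the namespace is an assumption; it depends on how the controller was installed):
kubectl get svc -n ingress-nginx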
2nd Update:
After some further investigation of the issue, I stumbled upon a kubernetes-dashboard bug report stating that it is indeed possible to show the endpoints of an ingress. The problem actually isn't caused by the dashboard, but by a missing parameter on the ingress controller deployment.
For the nginx-ingress-controller it's the following:
NGINX Ingress CLI arguments
The missing option is --publish-service
If you used Helm to deploy the controller, you need to add the parameter --set controller.publishService.enabled=true
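For example, with the ingress-nginx chart used elsewhere in this thread, that would be something like (a sketch; chart and release names assumed):
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.publishService.enabled=true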