Additional Domain Name for Ingress Controller on Azure Kubernetes

Currently I have an ingress controller that responds to a set of ingress rules. I use this setup as a de facto API gateway that exposes my different services to the internet.
I have an Azure domain, dev-APIGateway.northeurope.cloudapp.azure.com, pointing at the ingress controller; I set this up on the controller's public IP address via the Azure portal.
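(For reference, the same DNS label can be set from the CLI; the resource group and public IP names below are placeholders, and Azure DNS name labels must be lowercase:)
az network public-ip update \
  --resource-group MC_myResourceGroup_myAKSCluster_northeurope \
  --name myIngressPublicIP \
  --dns-name dev-apigateway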
Now to my problem: I want an additional domain name, dev-frontend.northeurope.cloudapp.azure.com, that resolves to the same ingress controller and satisfies the frontend ingress definition below.
I know I could achieve this by creating an additional ingress controller with its own IP address and domain, but that seems redundant. See the ingress definitions:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress-someapi
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: dev-APIGateway.northeurope.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: someServiceA-service
          servicePort: 80
        path: /someServiceA/(.*)
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress-frontend
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: dev-frontend.northeurope.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: frontend-service
          servicePort: 80
        path: /
If I had my own domain this would be fine: I would simply add A records for the domains I own, along with their CNAMEs, and point them at the ingress controller's IP address. But that is not the case, as I'm stuck using Azure domain names.
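One way this should be achievable without a second ingress controller: expose the same controller pods through a second LoadBalancer Service that carries its own Azure DNS label. A sketch, assuming the standard ingress-nginx pod labels and namespace (adjust both to your install); the service.beta.kubernetes.io/azure-dns-label-name annotation asks Azure to bind <label>.<region>.cloudapp.azure.com to the Service's new public IP:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-frontend   # hypothetical name
  namespace: ingress-nginx       # must match the controller pods' namespace
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: dev-frontend
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx      # assumed controller pod labels
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
Both hostnames then resolve to the same controller (via two public IPs), and the host-based rules above select the right backend.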

Related

Ingress Nginx external IP set not working

I'm trying to make the Ingress use an external IP I have created in Azure.
First I created an IP in the portal and granted my AKS cluster the Network Contributor role on it, then added it to the values file used by Helm:
# -- List of IP addresses at which the controller services are available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: ["20.124.63.xxx"]
# -- Used by cloud providers to connect the resulting `LoadBalancer` to a pre-existing static IP according to https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
loadBalancerIP: ""
loadBalancerSourceRanges: []
enableHttp: true
enableHttps: true
But after deployment, my ingress gets two external IPs, and the one I set does not work at all; only the automatically generated one works.
My config looks like this, so I think running this as a LoadBalancer is not exactly possible:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-one
            port:
              number: 80
  - host: xxx.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-two
            port:
              number: 80
I would like to use the static IP I have created to access my Ingress. What should I do to achieve that?
Exposing the Service of your ingress controller on your public IP can be done like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
Azure will now spin up a load balancer with your public IP.
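If the controller was installed with the ingress-nginx Helm chart, the same result can be had through chart values instead of editing the Service by hand (a sketch; release and namespace names assume a standard install, and externalIPs is left unset since Azure routes through the load balancer):
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.loadBalancerIP=20.124.63.xxx \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"=myResourceGroup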
The ingress controller will then route incoming traffic to your apps according to an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

Ingress rule cannot resolve the backend server IP address on Azure AKS

I installed the ingress controller on my AKS cluster using Helm. I also created an ingress rule for my service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-demo-ingress
  namespace: my-demo
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: mydemoingress.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: api-gateway
            port:
              number: 8080
When I deployed the above ingress rule, I noticed that my backend has no IP, shown below as api-gateway:8080 (<none>):
kubectl describe ing my-demo-ingress -n my-demo

Name:             my-demo-ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        my-demo
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host               Path   Backends
  ----               ----   --------
  mydemoingress.com
                     /(.*)  api-gateway:8080 (<none>)
Annotations:         nginx.ingress.kubernetes.io/rewrite-target: /$2
                     nginx.ingress.kubernetes.io/ssl-redirect: false
                     nginx.ingress.kubernetes.io/use-regex: true
Events:
  Type    Reason  Age    From                      Message
  ----    ------  ----   ----                      -------
  Normal  Sync    4m34s  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    4m34s  nginx-ingress-controller  Scheduled for sync
No IP address gets assigned to the ingress controller.
However, when I try this same setup on my local k3s cluster, the IP is assigned correctly. What am I doing wrong?
Update: Helm install command for the ingress controller:
NAMESPACE=ingress-basic

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace \
  --namespace $NAMESPACE \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
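As an aside, empty backends such as api-gateway:8080 (<none>) usually mean the controller found no endpoints for the named Service. A quick check, using the names from the manifests above:
kubectl -n my-demo get svc api-gateway          # does the Service exist in the Ingress's namespace?
kubectl -n my-demo get endpoints api-gateway    # are any Ready pod IPs behind it (selector matches)?
kubectl -n ingress-basic get svc ingress-nginx-controller   # did the controller Service get an external IP?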
• AFAIK, the syntax for 'servicePort' and 'serviceName' should be as given in the sample YAML file below. Also, the path for the specified service name may be missing from the YAML you shared, in which case the port mapping is not correct when the service is provisioned in the AKS cluster, and the internal load balancer cannot reach the created service.
Sample YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: aks-helloworld
          servicePort: 80
        path: /(.*)
      - backend:
          serviceName: ingress-demo
          servicePort: 80
        path: /hello-world-two(/|$)(.*)
• Apart from the above modifications, I would also suggest checking that you have assigned an IP address that is not already in use in your virtual network, and that you have deployed a load balancer in the AKS cluster using that IP address, as below:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
These modifications should help you resolve the issue with your backend IP address pools.
Also, refer to the link below for more information:
https://microsoft.github.io/AzureTipsAndTricks/blog/tip253.html

Azure Application Gateway Ingress Controller not reaching Service (ClusterIP)

I have explained the scenario here: I can reach a ClusterIP service using nginx ingress, but I can't reach the same service using Azure Application Gateway Ingress. The annotation below is not helping me:
appgw.ingress.kubernetes.io/rewrite-target: /
Any idea?
Make sure you add the annotations below to example-ingress:
appgw.ingress.kubernetes.io/use-private-ip: "false"
kubernetes.io/ingress.class: azure/application-gateway
You can see the full list and examples here.
You were using the wrong annotation. I have updated your ingress with the correct one:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
  - http:
      paths:
      - path: /apple/*
        pathType: Prefix
        backend:
          service:
            name: apple-service
            port:
              number: 5678
Check out all the AGIC annotations here.

Install two Traefik ingress controllers on the same Kubernetes cluster

I have a situation where I am planning to use two separate Traefik ingress controllers inside the Kubernetes cluster.
I have a few URLs which I want to be accessible through VPN only, and few which can be publicly accessible.
In the current architecture, I have one Traefik ingress controller and two separate ALBs, one internal and one internet-facing, both pointing to Traefik.
Let's say I have the URLs public.example.com and private.example.com. public.example.com points to the internet-facing ALB, and private.example.com points to the internal ALB. But if someone learns what public.example.com points to and points private.example.com at the same address in their /etc/hosts, they will be able to access my private website.
To avoid this, I am planning to run two separate Traefik ingress controllers, one serving only private URLs and one serving public URLs. Can this be done? Or is there any other way to avoid this?
To deploy two separate Traefik ingress controllers serving private and public traffic separately, I used the kubernetes.ingressclass argument.
This is what the documentation has to say about kubernetes.ingressclass:
--kubernetes.ingressclass Value of kubernetes.io/ingress.class annotation to watch for
I created two Deployments with separate values for kubernetes.ingressclass: one with kubernetes.ingressclass=traefik, which was behind a public ALB, and one with kubernetes.ingressclass=traefik-internal, which was behind a private/internal ALB.
For services I want to serve privately, I use the following annotation in the ingress objects:
annotations:
  kubernetes.io/ingress.class: traefik-internal
and for public ones:
annotations:
  kubernetes.io/ingress.class: traefik
My deployment.yaml:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-internal-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-internal-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-internal-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-internal-ingress-lb
    spec:
      serviceAccountName: traefik-internal-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.7
        name: traefik-internal-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --accesslog=true
        - --kubernetes.ingressclass=traefik-internal # watch only Ingress objects with the annotation "kubernetes.io/ingress.class: traefik-internal"
Hope this helps someone.
You can achieve this with a single ingress controller inside the cluster by creating separate Ingress objects.
For the private site, consider the whitelist-source-range annotation in the ingress resource:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/24,172.10.0.1
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          serviceName: test
          servicePort: 80
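One caveat with whitelist-source-range: it can only match the real client IP if the controller actually sees it. Behind a cloud load balancer that typically means setting externalTrafficPolicy: Local on the controller's Service (a sketch, assuming the standard ingress-nginx service name):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the client source IP so the whitelist can match it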
For the public site:
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          serviceName: test
          servicePort: 80
Multiple Træfik Deployments can run concurrently in the same cluster. For instance, it is conceivable to have one Deployment deal with internal and another one with external traffic.
For such cases, it is advisable to classify Ingress objects through a label and configure the labelSelector option per each Træfik Deployment accordingly. To stick with the internal/external example above, all Ingress objects meant for internal traffic could receive a traffic-type: internal label while objects designated for external traffic receive a traffic-type: external label. The label selectors on the Træfik Deployments would then be traffic-type=internal and traffic-type=external, respectively.
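A minimal sketch of that label-based split, using the Traefik v1.x kubernetes.labelselector option (label names are illustrative):
# internal controller args: watch only Ingress objects labelled traffic-type=internal
args:
- --kubernetes
- --kubernetes.labelselector=traffic-type=internal
# and on each internal Ingress object:
metadata:
  labels:
    traffic-type: internal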

Invalid host header and default backend 404 with Kubernetes ingress controller

Accessing my Node.js/React site using the URL displays "Invalid Host header". Accessing it through the public IP displays "default backend - 404".
I am using Kubernetes nginx controller with Azure cloud and load balancer.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myrule
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - mysite.uknorth.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: mysite.uknorth.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: service-ui
          servicePort: 8080
        path: /
      - backend:
          serviceName: service-api
          servicePort: 8999
        path: /api
Any guidance appreciated.
So let's assume the SSL part is OK (link), since you can reach the nginx ingress controller.
Your rewrite annotation is not necessary for what you need. Take a look at these rules:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myrule
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - mysite.uknorth.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: mysite.uknorth.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: service-ui
          servicePort: 8080
        path: /
      - backend:
          serviceName: service-api
          servicePort: 8999
        path: /api
Whatever you send to /api/.* will be routed to service-api, and whatever you send to / will be sent to service-ui.
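For example (the /api/users path is illustrative):
curl -k https://mysite.uknorth.cloudapp.azure.com/           # served by service-ui:8080
curl -k https://mysite.uknorth.cloudapp.azure.com/api/users  # served by service-api:8999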
Thanks for your feedback. It turns out the problem was not with the ingress rule above. The service-ui was running with incorrect command parameters and thus not acknowledging the request. I missed the fact that the service-api was responding correctly.
In short, check that the endpoints and running services are configured correctly; more a lesson for me than anyone else. I received a response by curling the service locally, but that didn't mean it could handle HTTPS requests over ingress, as the service was configured incorrectly.
Also, another lesson for me: ask the developers if the correct image is being used for the build. And ask them again if they say yes.
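A minimal version of that check, assuming the namespace and service names from the manifest above:
kubectl -n mynamespace get endpoints service-ui service-api   # non-empty ENDPOINTS means Ready pods back each Service
kubectl -n mynamespace run tmp --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://service-ui:8080/   # exercise the Service from inside the cluster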
