AKS ingress ADDRESS is empty - Azure

I created a service called portal, then I created this ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: portal-ingress
spec:
  backend:
    serviceName: portal
    servicePort: 8080
but the address is empty:
NAME             HOSTS   ADDRESS   PORTS   AGE
portal-ingress   *                 80      33m

The ADDRESS will remain empty for an AKS ingress, and that is not a problem. You can still use the external IP address of the ingress controller's service as the IP.
kubectl get svc -n <namespace-where-the-nginx-controller-is-deployed>
For example: kubectl get svc -n nginx
NAME                             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
nginx-nginx-ingress-controller   LoadBalancer   10.23.145.21   13.15.230.190   80:31108/TCP,443:31753/TCP
You can access the ingress at http(s)://13.15.230.190/.
I think there are probably ways to get the ADDRESS populated, but I never needed to; I hope you don't specifically need it populated either, and that using the exposed service is enough.
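For the record, one way the ADDRESS does get populated (an assumption based on the ingress-nginx Helm chart's publishService option, not something verified in this setup): have the controller publish its service's IP into the Ingress status.

helm upgrade nginx ingress-nginx/ingress-nginx \
  --namespace nginx \
  --reuse-values \
  --set controller.publishService.enabled=true  # release/namespace names follow the example above

With this enabled, the controller writes its LoadBalancer IP into the status of the Ingress objects it manages, and kubectl get ingress then shows it in the ADDRESS column.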

OK, three years later and the Kubernetes APIs have changed, but for the record: in my (today's) case this was due to not having installed the ingress controller as described in the doc:
https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli
NAMESPACE=ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace \
  --namespace $NAMESPACE \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
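After the install, a quick way to confirm the controller came up and got its public IP (a sketch; the service name ingress-nginx-controller follows from the release name used above, adjust if yours differs):

kubectl get pods -n $NAMESPACE
kubectl get svc -n $NAMESPACE ingress-nginx-controller

Once EXTERNAL-IP is populated on the controller service, the Ingress ADDRESS should fill in shortly afterwards.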

Related

AKS Ingress-Nginx ingress controller failing to route by host

I am configuring an ingress-nginx load balancer on Azure Kubernetes Service. I have installed the load balancer using Helm and set up an ingress. Here is the behavior I'm encountering:

- When I include a host in the pathing rules in my ingress config, I cannot access the service at that host URL; the request times out.
- When I don't include a host in the pathing rules, I can access the service at that host URL with no issues.
- Regardless of whether the host is included in the pathing rules, I can successfully access the service at the host URL when I curl it from any pod in the cluster.
- nslookup successfully resolves the host on my machine.

I'm trying to figure out why I'm unable to reach my service when the host is included in my ingress configuration. Any ideas? Technical details are below.
Note that the configuration currently points to only one service, but filtering by host will eventually be necessary: I plan to run multiple services with different domains through this load balancer.
Ingress controller configuration:
helm install --replace ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace \
  --namespace $NAMESPACE \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=127.0.0.1 \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \
  --set controller.service.loadBalancerIP=$IP
The ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - my.host.com
    secretName: tls-secret
  rules:
  - host: my.host.com  # Removing this item makes the service reachable
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: xrcfrontend
            port:
              number: 80
This is the curl command I'm running. It returns the correct results when run inside the pod, and times out when run outside.
curl https://my.host.com --insecure
If you are using AKS v1.24 or later, try setting the health-probe annotation below to the path /healthz instead of 127.0.0.1, either during the nginx ingress controller installation or on the nginx ingress controller service, and then use host-based routing in your nginx ingress routes:
service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
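For example, applied at install time (a sketch reusing the release name and variables from the install command in the question):

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace $NAMESPACE \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \
  --set controller.service.loadBalancerIP=$IP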
If the above helps, then: why was it not working with the host earlier?
Because the backend pool of the LB goes unhealthy due to the wrong health-probe path on the ingress controller. The ingress route only accepts traffic for the particular host name, so the health probe of the ingress controller service (the Azure LB) fails: probing / or 127.0.0.1 over HTTP returns a 404.
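A quick way to sanity-check the probe path once the fix is in (this assumes ingress-nginx's default server, which answers /healthz with 200 regardless of host rules; <controller-external-ip> is a placeholder):

curl -i http://<controller-external-ip>/healthz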
Github discussion on changes - https://github.com/Azure/AKS/issues/2903#issuecomment-1115720970
More details on installation - https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration

AKS Ingress controller DNS gives 404 error

I have created an AKS cluster with 2 services exposed using an ingress controller.
Below is the YAML file for the ingress with TLS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xyz-office-ingress02
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - office01.xyz.com
    secretName: tls-office-secret
  rules:
  - host: office01.xyz.com
  - http:
      paths:
      - path: /(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: office-webapp
            port:
              number: 80
      - path: /api/
        pathType: Prefix
        backend:
          service:
            name: xyz-office-api
            port:
              number: 80
kubectl describe ing
Name:             xyz-office-ingress02
Labels:           <none>
Namespace:        default
Address:          <EXTERNAL Public IP>
Ingress Class:    <none>
Default backend:  <default>
TLS:
  tls-office-secret terminates office01.xyz.com
Rules:
  Host   Path          Backends
  ----   ----          --------
  *
         /(/|$)(.*)    office-webapp:80 (10.244.1.18:80,10.244.2.16:80)
         /api/         xyz-office-api:80 (10.244.0.14:8000,10.244.1.19:8000)
Annotations:  cert-manager.io/cluster-issuer: letsencrypt
              kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/rewrite-target: /
              nginx.ingress.kubernetes.io/use-regex: true
Events:       <none>
Using the IP I am able to access both services; however, when using the DNS name it does not work and gives a 404 error.
Cleaning up remarks from comments: basically, the issue is with the ingress rules definition. We have the following:
rules:
- host: office01.xyz.com
- http:
    paths:
    ...
We know that connecting to the ingress directly (by IP, without DNS) works, while querying it through DNS returns a 404.
The reason for this 404 is that, when entering with a DNS name, you match the first rule, in which no backend is defined.
One way to fix this is to merge the "host" entry with your http rules so they form a single rule, e.g.:
spec:
  tls:
  ...
  rules:
  - host: office01.xyz.com
    http: # no "-", not a new entry => http & host belong to a single rule
      paths:
      - path: /(/|$)(.*)
        ...
      - path: /api/
        ...
I tried to reproduce the same issue in my environment and got the results below.
I created the DNS zone for the cluster.
I created the namespace:
kubectl create namespace ingress-basic
I added the Helm repo and used Helm to deploy the ingress controller:
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace <namespace-name> \
  --set controller.replicaCount=2
In the controller logs I can see the public IP assigned to the load balancer.
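The same IP can also be read straight from the controller's service (the namespace here matches the one created above):

kubectl get svc -n ingress-basic ingress-nginx-controller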
I created the role assignments needed to connect to the DNS zone: the managed identity of the cluster node pool was given DNS Zone Contributor rights on the domain zone:
az role assignment create --assignee $UserClientId --role 'DNS Zone Contributor' --scope $DNSID
I ran the following Helm command to deploy external-dns for the DNS zone:
helm install external-dns bitnami/external-dns \
  --namespace ingress-basic \
  --set provider=azure \
  --set txtOwnerId=<cluster-name> \
  --set policy=sync \
  --set azure.resourceGroup=<rg-name> \
  --set azure.tenantId=<tenant-id> \
  --set azure.subscriptionId=<sub-id> \
  --set azure.useManagedIdentityExtension=true \
  --set azure.userAssignedIdentityID=<UserClient-Id>
I installed cert-manager using Helm:
helm install cert-manager jetstack/cert-manager \
  --namespace ingress-basic \
  --version vXXXX
I created and ran the application:
vi nginxfile.yaml
kubectl apply -f nginxfile.yaml
I created the ingress route, which directs traffic to the application.
After that, verify that the certificate gets created; wait a few minutes for the DNS zone to update.
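To watch the certificate being issued, the standard cert-manager checks are (the namespace is an assumption matching the deployment above; replace the certificate name with yours):

kubectl get certificate --namespace ingress-basic
kubectl describe certificate <certificate-name> --namespace ingress-basic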
I created the cert-manager resources and deployed them to the cluster:
kubectl apply -f file.yaml --namespace ingress-basic
See the AKS ingress documentation linked above for more details.

How can we access the Kubernetes ingress controller IP using HTTPS?

I have deployed an application in Azure Kubernetes Service (AKS). I used an ingress controller for my POC. Previously I was using a domain (saurabh.com), and I am able to access saurabh.com through HTTPS.
Now I want to access my application using its IP address with HTTPS.
My ingress YAML file looks like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: saurabh-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: saurabh-ui
            port:
              number: 4200
By doing this, I am able to access my application using the IP, but it comes up over HTTP, not HTTPS. Can someone please help me with this? I want to access my application's IP through HTTPS.
Note: I have installed the certificates. When I access the domain saurabh.com, it comes up over HTTPS.
Thanks in advance.
I tried to reproduce the issue in my environment and got the results below.
Please use this link to access the files.
I created the namespace:
kubectl create namespace namespace_name
I created the applications and deployed them into Kubernetes:
kubectl apply -f filename.yaml
To check the namespaces that were created, and to get the IP address, use the command below:
kubectl get svc -n namespace_name
I installed the Helm chart for the controller and deployed the ingress resource into Kubernetes.
NOTE: after installing the nginx controller, we have to change the service type from ClusterIP to LoadBalancer.
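One way to make that switch without editing YAML (a sketch; the service name and namespace are placeholders, check yours with kubectl get svc first):

kubectl patch svc ingress-nginx-controller -n namespace_name \
  -p '{"spec": {"type": "LoadBalancer"}}'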
Here I enabled HTTPS in AKS using cert-manager, which generates and configures the certificates automatically.
I created the namespace for cert-manager:
kubectl create namespace namespace_name
kubectl get svc -n namespace_name
I installed cert-manager with Helm using the command below:
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.14.0 \
  --set installCRDs=true
To check the cert-manager pods:
kubectl get pods --namespace cert-manager
I created the cluster issuer and deployed it:
vi filename.yaml
kubectl apply --namespace app -f filename.yaml
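As a concrete sketch of what such a cluster issuer can look like (assuming a Let's Encrypt ACME issuer with an HTTP-01 solver on a recent cert-manager; older releases like v0.14.0 used apiVersion cert-manager.io/v1alpha2 instead of v1, and the email is a placeholder):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: user@example.com  # placeholder, use your own
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx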
I created and installed the TLS/SSL certificate:
kubectl apply --namespace app -f filename.yaml
We can verify whether the certificate was created using the command below; it shows the certificate's status:
kubectl describe cert app-web-cert --namespace namespace_name
Check the service using the command below:
kubectl get services -n app
Test the app over HTTPS: https://<hostname or IP address>
Here we can also check the certificates we added.
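For a quick command-line test against the bare IP (the IP is a placeholder; --insecure is needed because the certificate is issued for the hostname, not the IP):

curl -v https://<external-ip> --insecure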

How can I expose a service to other pods in Kubernetes?

I have a simple service
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
And here is what my cluster looks like. Pretty simple.
$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE                      NOMINATED NODE   READINESS GATES
my-shell-95cb5df57-cdj4z            1/1     Running   0          23m   10.60.1.32   aks-nodepool-19248108-0   <none>           <none>
nginx-deployment-76bf4969df-58d66   1/1     Running   0          36m   10.60.1.10   aks-nodepool-19248108-0   <none>           <none>
nginx-deployment-76bf4969df-jfkq7   1/1     Running   0          36m   10.60.1.21   aks-nodepool-19248108-0   <none>           <none>

$ kubectl get services -o wide
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE     SELECTOR
internal-ingress   LoadBalancer   10.0.0.194   10.60.1.35    80:30157/TCP   5m28s   app=nginx-deployment
kubernetes         ClusterIP      10.0.0.1     <none>        443/TCP        147m    <none>

$ kubectl get rs -o wide
NAME                          DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES        SELECTOR
my-shell-95cb5df57            1         1         1       23m   my-shell     ubuntu        pod-template-hash=95cb5df57,run=my-shell
nginx-deployment-76bf4969df   2         2         2       37m   nginx        nginx:1.7.9   app=nginx,pod-template-hash=76bf4969df
I see I have 2 pods with my nginx app. I want to be able to send a request from any other new pod to either one of them. If one crashes, I still want the request to succeed.
In the past I used a load balancer for this. The problem with load balancers is that they open up a public IP, and in this specific scenario I don't want a public IP anymore. I want this service to be invoked by other pods directly, without a public IP.
I tried to use an internal load balancer:
apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "my-subnet"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.60.1.45
  ports:
  - port: 80
  selector:
    app: nginx-deployment
The problem is that it does not get an IP in my 10.60.0.0/16 network as described here: https://learn.microsoft.com/en-us/azure/aks/internal-lb#specify-a-different-subnet
Instead, I get a never-ending <pending>:
kubectl get services -o wide
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE    SELECTOR
internal-ingress   LoadBalancer   10.0.0.230   <pending>     80:30638/TCP   15s    app=nginx-deployment
kubernetes         ClusterIP      10.0.0.1     <none>        443/TCP        136m   <none>
What am I missing? How to troubleshoot? Is it even possible to have pod to service communication?
From the message you provide, it seems you want to use a specific private IP address in the same subnet the AKS cluster uses. A possible reason is that the IP address you want is already assigned by AKS, which means you cannot use it.
Troubleshooting
Go to the VNet your AKS cluster uses and check whether the IP address is already in use.
Solution
Choose an IP address from the subnet the AKS cluster uses that has not already been assigned, or do not request a specific one at all and let AKS assign your load balancer's IP dynamically. Then change your YAML file as below:
apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx-deployment
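After applying, the internal IP should show up under EXTERNAL-IP (the service name matches the YAML above):

kubectl get service internal-ingress -o wide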
Use a ClusterIP Service (the default type), which creates only a cluster-internal IP and no public IP:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
Then you can access the Service (and thus the Pods behind it) from any other Pod in the same namespace by using the Service name as the DNS name:
curl nginx-service
If the Pod from which you want to access the Service is in a different namespace, you have to use the fully qualified domain name of the Service:
curl nginx-service.my-namespace.svc.cluster.local

Unable to assign public IP address to AKS: pending forever

I allocated an IP address for my resource group as follows:
az network public-ip create --resource-group myResourceGroup --name ipName --allocation-method static
Now I'd like to assign it to my AKS cluster, so I altered the YAML as follows:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  loadBalancerIP: xx.xx.xxx.xxx # <-- the IP generated before
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx-sgr
Then I run:
kubectl apply -f mykube.yaml
But it appears to be stuck:
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   LoadBalancer   10.0.77.234   <pending>     80:32320/TCP   79m
By running describe, I indeed get the following:
Warning  CreatingLoadBalancerFailed  21m (x19 over 86m)  service-controller  Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx: user supplied IP Address xx.xx.xxx.xxx was not found in resource group MC_myResourceGroup_myAKSCluster_westeurope
Please note that it seems to be searching in a resource group whose name is composed of the resource group I specified in the first command (the same one the cluster is in) plus other information... what am I doing wrong?
As far as I know, the likely reason is that you need to grant your AKS cluster permission on the resource group where you created the public IP, if you created it in a different group. For more details, see Use a static IP address outside of the node resource group. And you need to add an annotation as below:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
  name: azure-load-balancer
spec:
  loadBalancerIP: 40.121.183.52
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-load-balancer
Or you can simply create the public IP in your AKS cluster's node resource group. In your case, that group's name can be found in the error you provided: MC_myResourceGroup_myAKSCluster_westeurope.
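For that second option, the allocation command from the question simply targets the node resource group instead (a sketch built from the group name in the error message):

az network public-ip create \
  --resource-group MC_myResourceGroup_myAKSCluster_westeurope \
  --name ipName \
  --allocation-method static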
