Azure AKS and Application Gateway returning 404

I have an AKS cluster deployed with an Application Gateway. The workloads are all Docker images running on the AKS cluster behind a simple Ingress, all in the default namespace: one is a Vue frontend, the second a Java Spring backend, and the last a full-stack Tomcat image. All three services ran completely fine without any issues; however, today all three started returning a 404 error (without any changes to the cluster, to my knowledge).
When testing the health of these services, I was still able to reach all of them via kubectl. In addition, the health probe on Azure Application Gateway returned a healthy 200 status code for all three services.
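For example, one can bypass Application Gateway entirely and hit the Service directly; a sketch using the Service name from the YAML below:
kubectl port-forward svc/testvuefe-service 8080:80   # tunnel straight to the ClusterIP Service
curl -i http://localhost:8080/                       # should return the Vue frontend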
I have tried removing the workloads and adding them back again, done the same for the services, and likewise for the Ingress. I have also tried setting everything up from scratch on a fresh AKS cluster, where all three images perform without any issues.
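Since the pods are reachable and the probes are healthy, it is also worth checking whether AGIC (the Application Gateway Ingress Controller) is still syncing the Ingress to the gateway. A sketch, assuming the AKS add-on's default names in kube-system:
# is the controller running, and is it still reconciling the Ingress?
kubectl get pods -n kube-system -l app=ingress-appgw
kubectl logs -n kube-system deployment/ingress-appgw-deployment --tail=100
# does the Ingress still have an address and the expected annotations?
kubectl describe ingress testvuefe-ingress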
The YAML for one application on the AKS cluster looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testvuefe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testvuefe
  template:
    metadata:
      labels:
        app: testvuefe
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: testvuefe
        image: testdocker.azurecr.io/testvuefe:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: '0'
            memory: '0'
          limits:
            cpu: '128'
            memory: 512G
        volumeMounts:
        - mountPath: "/fileShare"
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: aks-azurefile
---
apiVersion: v1
kind: Service
metadata:
  name: testvuefe-service
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    name: port80
    port: 80
    protocol: TCP
  selector:
    app: testvuefe
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testvuefe-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: /user
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
spec:
  rules:
  - host: test.wbsoft.co.kr
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testvuefe-service
            port:
              number: 80
As this issue has occurred once on a cluster, I need to understand what is causing it so that it does not happen in production. It would also be great if a solution could be found for the issue.

Related

Azure Kubernetes Ingress: 502 Bad Gateway while using Path inside ingress configuration

I am facing a 502 Bad Gateway issue with my Application Gateway.
I am using Azure Kubernetes Service to deploy my cluster, which is connected to the Ingress Application Gateway.
Configuration Files:
kube-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myApp
  namespace: en02
  labels:
    app: myApp
spec:
  selector:
    matchLabels:
      app: myApp
  replicas: 1
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
      - name: myApp
        image: somecr.azurecr.io/myApp:1.0.0.30
        resources:
          limits:
            memory: "64Mi"
            cpu: "100m"
        ports:
        - containerPort: 5100
        env:
        - name: ASPNETCORE_HOSTINGSTARTUPASSEMBLIES
          value: "Microsoft.AspNetCore.ApplicationInsights.HostingStartup"
        - name: "ApplicationInsights__ConnectionString"
          value: "myKey"
---
apiVersion: v1
kind: Service
metadata:
  namespace: en02
  name: myApp
spec:
  selector:
    app: myApp
  ports:
  - port: 30153
    targetPort: 5100
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: en02
  name: etopia
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: "/api/home"
spec:
  rules:
  - http:
      paths:
      - path: /myApp/
        backend:
          service:
            name: myApp
            port:
              number: 30153
        pathType: Exact
Result of kubectl describe ingress -n en02:
Name:             ingress
Labels:           <none>
Namespace:        en02
Address:          public-ip
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path     Backends
  ----        ----     --------
  *
              /myApp/  myApp:30153 (10.0.0.106:5100)
Annotations:  appgw.ingress.kubernetes.io/health-probe-path: /api/home
              kubernetes.io/ingress.class: azure/application-gateway
Events:       <none>
I am getting the expected results from 10.0.0.106:5100/api/home, and the Application Gateway health status is 200.
No matter what I do, I always get a Bad Gateway error. I was able to access a sample app on port 80 (where the ingress path was /), but if I specify anything in the ingress path (/cashify/), it always gives me Bad Gateway.
I tried adding a readinessProbe to the container, but it doesn't help (although I am already getting 200 under the Application Gateway health status).
Please help.
Please check if the workaround below helps.
Try updating the Ingress YAML to use a wildcard path specification so that APIs under different sub-paths can be accessed:
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
....
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /api/* # wildcard path
        backend:
          serviceName: apiservice
          servicePort: 80
      - backend:
          .....
          servicePort: 80
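Also worth checking when only path-based routes fail: Application Gateway forwards the original request path (e.g. /myApp/api/home) to the backend, so if the app actually serves at /api/home it will answer with an error even though the probe is healthy. AGIC has a backend-path-prefix annotation that rewrites the matched prefix before forwarding; a sketch (annotation name per the AGIC docs, the prefix value here is an assumption):
annotations:
  kubernetes.io/ingress.class: azure/application-gateway
  # rewrite /myApp/... to /... before forwarding to the backend
  appgw.ingress.kubernetes.io/backend-path-prefix: "/"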
Note from the MS docs: if you want Application Gateway to probe on a different protocol, host name, or path, and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
As you said you have defined a readinessProbe, please check that the path of those probes is correct. Check the same for both the livenessProbe and the readinessProbe.
Also please note that readinessProbe and livenessProbe are supported only when configured with httpGet.
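For instance, a minimal sketch of an httpGet readinessProbe for the container above (path taken from the health-probe annotation, port from the Deployment; the timing values are assumptions):
readinessProbe:
  httpGet:
    path: /api/home
    port: 5100
  initialDelaySeconds: 5
  periodSeconds: 10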
References:
bad request - path based routing · kubernetes-ingress · GitHub
application-gateway-troubleshooting-502

Can't access an application deployed on AKS

I'm trying to access a simple ASP.NET Core application deployed on Azure AKS, but I'm doing something wrong.
This is the deployment .yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet
  template:
    metadata:
      labels:
        app: aspnet
    spec:
      containers:
      - name: aspnetapp
        image: <my_image>
        resources:
          limits:
            cpu: "0.5"
            memory: 64Mi
        ports:
        - containerPort: 8080
and this is the service .yml
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    name: aspnetapp
Everything seems deployed correctly.
Another check I did was to exec into the pod and run
curl http://localhost:80
and the application responds correctly; but if I try to access the application from the browser using http://20.103.147.69, a timeout is returned.
What else could be wrong?
It seems that you do not have an Ingress Controller deployed on your AKS cluster, as you have exposed your application directly. You will need one in order to get Ingress to work.
To verify that your application is working, you can use port-forward and then access http://localhost:8080:
kubectl port-forward aspnetapp 8080:8080
But you should definitely install an ingress controller: here is a workflow from MS to install ingress-nginx as the IC on your cluster.
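A sketch of one common install path via the upstream Helm chart (repo and chart names per the ingress-nginx project; adjust the namespace to your setup):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace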
You will then only expose the ingress controller to the internet, and you can also specify the loadBalancerIP statically if you created the public IP in advance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The Ingress Controller will then route incoming traffic to your application with an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
PS: Never expose your application directly to the internet; always use the ingress controller.
In your Deployment, you configured your container to listen on port 8080, so the Service's targetPort must be 8080 (as shown in your service .yml). Also check the Service selector: it selects name: aspnetapp, but the pod template is labeled app: aspnet, so the Service will not match any pods until the two agree.
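A sketch of the Service with a selector that matches the pod labels (values taken from the two YAML files above):
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080   # must match the containerPort
  selector:
    app: aspnet        # must match the pod template labels, not the Deployment name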
Documentation

web application running on k8s cluster giving null when using request.getCookies() or request.getSession()

I am trying to run a web application developed using Java, JSP, Servlets, AngularJS and jQuery on a k8s cluster.
While logging in to the application, the line that calls request.getCookies() or request.getSession() returns null, which then throws a NullPointerException. This exception does not allow me to log in to the application.
I have tried running the same application on my local machine and on Azure using Docker, and it works fine, which confirms there is no issue with the image. The following command was used to run it on Docker:
docker run -p 8080:8080 <image_name>
I am using k8s on Azure (Azure Kubernetes Service), and the following is the configuration I have used.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
  namespace: default
  name: app
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: http
  selector:
    app: app
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - image: <image_name>
        name: app
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: app
All the pods, services and ingress are running normally, without restarts or exceptions.
I tried creating an Ingress as well, but the issue is still the same. The following configuration was used to create the Ingress (I also changed service.spec.type to NodePort before deploying it):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gateway-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app/*
        backend:
          serviceName: app
          servicePort: 8080
Please advise how we can run applications that rely on sessions and cookies on a k8s cluster.
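One thing worth ruling out when sessions misbehave on k8s (although with replicas: 1 it should not matter here): if a session-based app is ever scaled out behind a plain Service, consecutive requests can land on different pods and lose the session. Service-level session affinity pins each client to one pod; a minimal sketch against the Service above:
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: default
spec:
  sessionAffinity: ClientIP   # route each client IP to the same pod
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: app
  type: LoadBalancer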

NodeJS Service is not reachable on Kubernetes (GCP)

I have hit a true roadblock here and have not found any solutions so far. Ultimately, my deployed NodeJS + Express server is not reachable when deployed to a Kubernetes cluster on GCP. I followed the guide & example, and nothing seems to work.
The cluster, node and service are running just fine and don't have any issues. Furthermore, it works just fine locally when running it with Docker.
Here's my Node YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2019-08-06T04:13:29Z
  generation: 1
  labels:
    run: nodejsapp
  name: nodejsapp
  namespace: default
  resourceVersion: "23861"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nodejsapp
  uid: 8b6b7ac5-b800-11e9-816e-42010a9600de
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nodejsapp
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nodejsapp
    spec:
      containers:
      - image: gcr.io/${project}/nodejsapp:latest
        imagePullPolicy: Always
        name: nodejsapp
        ports:
        - containerPort: 5000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2019-08-06T04:13:29Z
    lastUpdateTime: 2019-08-06T04:13:29Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Service YAML:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-08-06T04:13:34Z
  labels:
    run: nodejsapp
  name: nodejsapp
  namespace: default
  resourceVersion: "25444"
  selfLink: /api/v1/namespaces/default/services/nodejsapp
  uid: 8ef81536-b800-11e9-816e-42010a9600de
spec:
  clusterIP: XXX.XXX.XXX.XXX
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32393
    port: 80
    protocol: TCP
    targetPort: 5000
  selector:
    run: nodejsapp
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX.XXX
The NodeJS server is configured to run on port 5000. I also tried it without port-forwarding, but that made no difference in the result.
Any help is much appreciated.
UPDATE:
I used this guide and followed the instructions: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
UPDATE 2:
FINALLY figured it out. I'm not sure why this is not mentioned anywhere, but you have to create an Ingress that routes the traffic to the pod accordingly.
Here's the example config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-32064--abfe1f07378017e9":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-nodejsapp--abfe1f07378017e9
    ingress.kubernetes.io/target-proxy: k8s-tp-default-nodejsapp--abfe1f07378017e9
    ingress.kubernetes.io/url-map: k8s-um-default-nodejsapp--abfe1f07378017e9
  creationTimestamp: 2019-08-06T18:59:15Z
  generation: 1
  name: nodejsapp
  namespace: default
  resourceVersion: "171168"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/versapay-api
  uid: 491cd248-b87c-11e9-816e-42010a9600de
spec:
  backend:
    serviceName: nodejsapp
    servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX
Adding this as an answer since I need to include an image (but it is not necessarily an answer):
As shown in the image (not included here), a green tick should be visible beside your backend service.
Probable solution:
In your NodeJS app, please add the following base URLs, i.e.:
When the application is started locally, http://localhost:5000/ should return a 200 status code (ideally with "Server is running..." or some such message).
Also, if path-based routing is enabled, another base URL is required: http://localhost:5000/<nodeJsAppUrl>/ should also return a 200 status code.
The above URLs are required for the health checks of both the LoadBalancer and the backend service; then redeploy the service.
Please let me know if the above solution doesn't fix the said issue.
You need an intermediate service to internally expose your deployment.
Right now, you have a set of pods grouped in a deployment and a load balancer exposed in your cluster, but you need to link them with an additional service.
You can try using a NodePort like the following:
apiVersion: v1
kind: Service
metadata:
  name: nodejsapp-nodeport
spec:
  selector:
    run: nodejsapp
  ports:
  - name: default
    protocol: TCP
    port: 32393
    targetPort: 5000
  type: NodePort
This NodePort service sits between your load balancer and the pods in your deployment, targeting them on port 5000 and exposing port 32393 (per your settings in the original question; you can change it).
From here, you can redeploy your load balancer to target the NodePort service. This way, you can reach your NodeJS app via port 80 from your load balancer's public address.
apiVersion: v1
kind: Service
metadata:
  name: nodejs-lb
spec:
  selector:
    run: nodejsapp
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 32393
  type: LoadBalancer
The whole scenario would look like this:
publicly exposed address --> LoadBalancer --> NodePort --> Deployment --> Pods

Azure Kubernetes Services 502 Bad Gateway

I have an AKS cluster up and running, and under heavy user load I get some 502 Bad Gateway responses. This only happens when the request load is high; I used Azure DevOps load testing to achieve this behavior. I believe it has something to do with the load balancer timeouts, but I am not too sure how to go about debugging this. Perhaps I should be checking logs somewhere? Searching around Google tells me that I should be checking nginx logs, but I'm not sure where to find those. Sorry, I am a newbie in the Kubernetes world.
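If the cluster is running an nginx ingress controller, its logs can be pulled with kubectl; a sketch assuming the common ingress-nginx install in its own namespace (adjust the namespace and labels to your setup):
kubectl get pods -n ingress-nginx
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=200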
These are all the pods in the cluster; the apserver-api-... pods are my actual apps that serve the requests.
The YAML file used to generate this:
# DS for AP
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: apserver-api
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
  template:
    metadata:
      labels:
        app: apserver-api
    spec:
      containers:
      - name: apserver-api
        image: IMAGE
        env:
        - name: APP_SVC
          value: apserver-api
        ports:
        - containerPort: 80
        imagePullPolicy: IfNotPresent
---
# Service for AP
kind: Service
apiVersion: v1
metadata:
  labels:
    app: apserver-api
  name: apserver-api
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
    targetPort: 80
  selector:
    app: apserver-api
  type: "LoadBalancer"
and a screenshot of the load test (not included here).
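As an aside, the Service in the YAML above declares type twice (ClusterIP near the top and "LoadBalancer" at the end); most YAML parsers silently keep the last value, so the Service becomes a LoadBalancer. A cleaned-up sketch of that Service, assuming LoadBalancer was the intent:
kind: Service
apiVersion: v1
metadata:
  labels:
    app: apserver-api
  name: apserver-api
spec:
  type: LoadBalancer   # declared once
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
    targetPort: 80
  selector:
    app: apserver-api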
