I need help with AGIC configuration. I am using a LoadBalancer service for my existing AKS cluster; below is a sample YAML file that works, and I can access the application using the LB public IP:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
  namespace: asp-test
  labels:
    app: asp-frontend
spec:
  selector:
    matchLabels:
      app: asp-frontend
  template:
    metadata:
      labels:
        app: asp-frontend
    spec:
      containers:
        - name: aspnetapp
          image: "mcr.microsoft.com/dotnet/core/samples:aspnetapp"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetapp-load
  namespace: asp-test
  labels:
    app: asp-frontend
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: mc_asp-onef-dev_rg_asp_aks_eastus2
spec:
  loadBalancerIP: 10.10.10.10
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: asp-frontend
==================
Now I would like to use AGIC instead of the LB, and I am just adding the section below to the file, but I get a "502 Bad Gateway" error. My AKS and Application Gateway VNets are peered, and I don't have an NSG blocking the connection. The deployment is successful and the pods are running; I can access the app using the LB IP, but not through AGIC.
I have also tried editing this file to use a normal AKS service instead of the LB type, but I still get the same error.
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aspnetapp
  namespace: asp-test
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: aspnetapp-load
              servicePort: 80
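One check that applies in this situation is Application Gateway's own backend health report. A minimal diagnostic sketch, with hypothetical gateway and resource-group names to replace with your own:

az network application-gateway show-backend-health \
    --name my-appgw \
    --resource-group my-appgw-rg

If the backend pool shows Unhealthy here while the pods answer locally, the 502 usually points at the gateway's probe or HTTP settings rather than at the pods.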
Related
I am facing a 502 Bad Gateway issue with my Application Gateway. I am using Azure Kubernetes Service to deploy my cluster, which is connected to the Application Gateway ingress.
Configuration Files:
kube-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myApp
  namespace: en02
  labels:
    app: myApp
spec:
  selector:
    matchLabels:
      app: myApp
  replicas: 1
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
        - name: myApp
          image: somecr.azurecr.io/myApp:1.0.0.30
          resources:
            limits:
              memory: "64Mi"
              cpu: "100m"
          ports:
            - containerPort: 5100
          env:
            - name: ASPNETCORE_HOSTINGSTARTUPASSEMBLIES
              value: "Microsoft.AspNetCore.ApplicationInsights.HostingStartup"
            - name: "ApplicationInsights__ConnectionString"
              value: "myKey"
---
apiVersion: v1
kind: Service
metadata:
  namespace: en02
  name: myApp
spec:
  selector:
    app: myApp
  ports:
    - port: 30153
      targetPort: 5100
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: en02
  name: etopia
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: "/api/home"
spec:
  rules:
    - http:
        paths:
          - path: /myApp/
            backend:
              service:
                name: myApp
                port:
                  number: 30153
            pathType: Exact
Result of
kubectl describe ingress -n en02
Name:             ingress
Labels:           <none>
Namespace:        en02
Address:          public-ip
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path     Backends
  ----        ----     --------
  *
              /myApp/  myApp:30153 (10.0.0.106:5100)
Annotations:  appgw.ingress.kubernetes.io/health-probe-path: /api/home
              kubernetes.io/ingress.class: azure/application-gateway
Events:       <none>
I am getting the expected result from 10.0.0.106:5100/api/home, and the Application Gateway health status is 200.
No matter what I do, I always get a Bad Gateway error. I was able to access a sample app on port 80 (where the ingress path was /), but if I specify anything else in the ingress path (e.g. /cashify/), it always gives me Bad Gateway.
I tried adding a readinessProbe to the container, but that doesn't help (and I am already getting 200 under the Application Gateway health status).
Please help.
Please check whether the approaches below work around the issue.
Try updating the deployment YAML to use a wildcard path specification, so that APIs under different paths can be accessed:
deployment.yml
apiVersion: extensions/v1beta1
kind: Ingress
....
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - host: xxx
      http:
        paths:
          - path: /api/* # wildcard path
            backend:
              serviceName: apiservice
              servicePort: 80
          - backend:
              .....
              servicePort: 80
Note from the MS docs: If you want Application Gateway to probe on a different protocol, host name, or path, and to recognize a different status code as healthy, configure a custom probe and associate it with the HTTP settings.
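With AGIC, that custom probe is normally driven by Ingress annotations rather than configured by hand. A minimal sketch: health-probe-path is the annotation already used above, while health-probe-status-codes is an assumption on my part and depends on the AGIC version, so check the release notes:

metadata:
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: "/api/home"
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399"  # assumption: newer AGIC releases only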
As you said you have defined a readinessProbe, please check that the path of those probes is correct.
Check the same with both: 1. livenessProbe 2. readinessProbe
Also please note that readinessProbe and livenessProbe are supported when configured with httpGet.
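A minimal sketch of both probes with httpGet, reusing the /api/home path and port 5100 from the question above (the timing values are assumptions to tune):

containers:
  - name: myApp
    # ... image, ports, etc. as in the Deployment above
    readinessProbe:
      httpGet:
        path: /api/home   # same path the App Gateway probe uses
        port: 5100
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /api/home
        port: 5100
      initialDelaySeconds: 15
      periodSeconds: 20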
References:
bad request - path based routing · kubernetes-ingress · GitHub
application-gateway-troubleshooting-502
I'm trying to access a simple ASP.NET Core application deployed on Azure AKS, but I'm doing something wrong.
This is the deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet
  template:
    metadata:
      labels:
        app: aspnet
    spec:
      containers:
        - name: aspnetapp
          image: <my_image>
          resources:
            limits:
              cpu: "0.5"
              memory: 64Mi
          ports:
            - containerPort: 8080
and this is the service.yml:
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    name: aspnetapp
Everything seems to be deployed correctly.
Another check I did was to enter the pod and run
curl http://localhost:80
and the application responds correctly. But if I try to access the application from the browser using http://20.103.147.69, a timeout is returned.
What else could be wrong?
It seems that you do not have an Ingress Controller deployed on your AKS cluster, since you have your application exposed directly. You will need one in order to get Ingress to work.
To verify that your application is working, you can use port-forward and then access http://localhost:8080:
kubectl port-forward deployment/aspnetapp 8080:8080
But you should definitely install an ingress controller: here is a workflow from MS to install ingress-nginx as the IC on your cluster.
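A minimal install sketch along the lines of that workflow, assuming Helm 3 (the release name and namespace are illustrative):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace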
You will then only expose the ingress controller to the internet, and you can also specify the loadBalancerIP statically if you created the public IP in advance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The ingress controller will then route incoming traffic to your application according to an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
PS: Never expose your application directly to the internet; always use the ingress controller.
In your Deployment, you configured your container to listen on port 8080. You need to set the targetPort value to 8080 in the Service definition.
Documentation
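A minimal sketch of such a Service, assuming the selector should match the app: aspnet label that the Deployment's pod template above actually carries (the original selector uses name: aspnetapp, which matches no pod label and is worth double-checking):

apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080  # must match the containerPort in the Deployment
  selector:
    app: aspnet  # assumption: aligned with the pod template label above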
Problem: I want to implement stickiness on a request header called "ORDER_ID", so that requests for a specific order are served by a specific pod in Kubernetes. In my case it is not working: requests do not stick to a specific pod and get distributed across different pods instead.
This is how I have installed Ambassador:
helm repo add datawire https://www.getambassador.io
kubectl create namespace ambassador
helm install ambassador --namespace ambassador datawire/ambassador
Below are the YAML files:
1. Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: orderservice
          image: xyz.io/orderservice
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
2. service.yml
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
  selector:
    app: order-service
3. mapping.yml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: ambassador-backend
spec:
  prefix: /
  service: order-service:3000
  resolver: endpoint
  load_balancer:
    policy: least_request
    header: ORDER_ID
Testing with:
curl --insecure --location --request GET 'http://.../backend/orders' --header 'Content-Type: application/json' --header 'ORDER_ID: 1234'
Is there something I am missing, or am I doing something wrong?
To set up stickiness, an Ingress is needed to route requests to order-service through the Ambassador load balancer. A KubernetesEndpointResolver is also needed for endpoint-level service discovery; the Mapping references it (resolver: endpoint) to override Ambassador's default configuration.
The missing parts are:
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: ambassador
  name: order-service-ingress
spec:
  rules:
    - http:
        paths:
          - path: /v1/orders/*
            backend:
              serviceName: order-service
              servicePort: 4000
endpoint_resolver.yml
apiVersion: getambassador.io/v2
kind: KubernetesEndpointResolver
metadata:
  name: endpoint
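Also worth noting: per Ambassador's load-balancer documentation, header-based affinity is tied to the ring_hash (or maglev) policy; least_request does not consume the header field. A hedged sketch of the Mapping adjusted accordingly, keeping everything else from the question:

apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: ambassador-backend
spec:
  prefix: /
  service: order-service:3000
  resolver: endpoint
  load_balancer:
    policy: ring_hash  # hashes the header below so one ORDER_ID maps to one pod
    header: ORDER_ID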
I have hosted Docker images in a VM on Azure, and I'm trying to access the service from outside the VM. This is not working because no external IP is generated for the service.
After building the Docker image, I applied a YAML file to create the Deployment and Service. My YAML file looks as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planservice-deployment
  labels:
    app: planservice-deploy
spec:
  selector:
    matchLabels:
      run: planservice-deploy
  replicas: 2
  template:
    metadata:
      labels:
        run: planservice-deploy
    spec:
      containers:
        - name: planservice-deploy
          image: planserviceimage
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8086
---
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8086
  selector:
    run: planservice-deploy
---
After that, I ran the following command to look at the running services:
kubectl get services --output=wide
This command returned all the running services and their external IP information. But when I looked at the list, all the services had blank external IPs.
How do I set an external IP for the services, so that I can access my web services from outside the VM?
You need to change the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8086
  selector:
    run: planservice-deploy
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
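After applying this, the external IP can take a minute or two to be assigned; a quick way to watch for it (plain kubectl, using the service name from above):

kubectl get service planservice-service --watch

Once the EXTERNAL-IP column changes from <pending> to an address, the service is reachable from outside the cluster.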
I am working with the AKS service. I started with a tutorial on Azure that deploys the Azure Voting app.
Then I created my own app. It is a RESTful service, and I created a container image for it. Now when I deploy my service, the public service endpoint is not accessible: the app does not respond, a traceroute takes me into the MSDN network but never reaches the IP address, and the IP is not pingable either.
Here is the tutorial URL from which I took the sample for the front-end deployment and service YAML, and that one works fine:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal
Here is my YAML. What am I doing wrong?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bwce-simplerest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bwce-simplerest
  template:
    metadata:
      labels:
        app: bwce-simplerest
    spec:
      containers:
        - name: bwce-simplerest
          image: tauqirghani/simplerest:1.0
          ports:
            - containerPort: 7070
---
apiVersion: v1
kind: Service
metadata:
  name: bwce-simplerest
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: "TCP"
      targetPort: 7070
  selector:
    app: bwce-simplerest
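A couple of checks that usually narrow this down, using plain kubectl and the names from the manifests above. Note also that Azure load balancer IPs generally do not answer ICMP, so a failed ping by itself proves little:

# confirm the Service got an external IP and that its selector found the pod
kubectl get service bwce-simplerest
kubectl get endpoints bwce-simplerest

# bypass the load balancer and talk to the pod directly
kubectl port-forward deployment/bwce-simplerest 7070:7070
curl http://localhost:7070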