Azure internal load balancer with Azure Kubernetes Service not working

I am trying to create an internal load balancer by following this guide:
https://learn.microsoft.com/en-us/azure/aks/internal-lb
The error message I receive references a user that does not exist:
Warning CreatingLoadBalancerFailed 3m (x7 over 9m) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service default/azure-vote-front: network.SubnetsClient#Get: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '91c18461-XXXXXXXX---1441d7bcea67' with object id '91c18461-XXXXXXXXX-1441d7bcea67' does not have authorization to perform action 'Microsoft.Network/virtualNetworks/subnets/read' over scope '/subscriptions/996b68c3-ec32-46d4-8d0e-80c6da2c1a3b/resourceGroups/<<resource group>>/providers/Microsoft.Network/virtualNetworks/<<VNET>>/subnets/<<subnet id>>
When I search for this client ID in my Azure subscription, I do not find it.
Any help would be highly appreciated.
Below is my manifest file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: phishbotstagingregistry.azurecr.io/azure-vote-front:v1
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

When you created the AKS cluster you provided the wrong credentials (or stripped permissions later), so the service principal AKS runs under is not authorized to read that subnet, which the error clearly states:
Code="AuthorizationFailed" Message="The client
'91c18461-XXXXXXXX---1441d7bcea67' with object id
'91c18461-XXXXXXXXX-1441d7bcea67' does not have authorization to
perform action 'Microsoft.Network/virtualNetworks/subnets/read' over
scope
'/subscriptions/996b68c3-ec32-46d4-8d0e-80c6da2c1a3b/resourceGroups/<>/providers/Microsoft.Network/virtualNetworks/<>/subnets/<>
You can use az aks list --resource-group <your-resource-group> to find your service principal, but the error already gives that away: the client ID in the message is the service principal's ID.
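Once you know the service principal, the fix is to grant it read access on the VNET that holds the subnet. A minimal sketch with the Azure CLI; the resource group, cluster, and VNET names are placeholders for your own, and Network Contributor is broader than strictly needed (a custom role with just Microsoft.Network/virtualNetworks/subnets/read would also do):

# Look up the client ID the cluster runs under
SP_ID=$(az aks show --resource-group <your-resource-group> \
  --name <your-aks-cluster> --query servicePrincipalProfile.clientId -o tsv)

# Resolve the full resource ID of the VNET containing the target subnet
VNET_ID=$(az network vnet show --resource-group <vnet-resource-group> \
  --name <vnet-name> --query id -o tsv)

# Grant the service principal rights on that VNET scope
az role assignment create --assignee "$SP_ID" \
  --role "Network Contributor" --scope "$VNET_ID"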

Related

How to configure Azure Application Gateway Ingress Controller (AGIC) yaml

I need help with the AGIC configuration. I am using a LoadBalancer service for my existing AKS cluster; below is the sample yaml file that works, and I can access the application using the LB public IP:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
  namespace: asp-test
  labels:
    app: asp-frontend
spec:
  selector:
    matchLabels:
      app: asp-frontend
  template:
    metadata:
      labels:
        app: asp-frontend
    spec:
      containers:
      - name: aspnetapp
        image: "mcr.microsoft.com/dotnet/core/samples:aspnetapp"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetapp-load
  namespace: asp-test
  labels:
    app: asp-frontend
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: mc_asp-onef-dev_rg_asp_aks_eastus2
spec:
  loadBalancerIP: 10.10.10.10
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: asp-frontend
==================
Now I would like to use AGIC instead of the LB, so I am just adding the section below to the file, but I get a "502 Bad Gateway" error. My AKS and AG vnets are peered, and I don't have an NSG blocking the connection. The deployment is successful and the pods are running; I can access the app using the LB IP but not through AGIC.
I have tried editing this file to use a normal AKS service instead of the LB, but I still get the same error.
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aspnetapp
  namespace: asp-test
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: aspnetapp-load
          servicePort: 80
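One way to narrow down a 502 from Application Gateway is to ask the gateway which backends it considers healthy, and to read the AGIC controller's own logs. A debugging sketch; the resource group and gateway names are placeholders, and the deployment name is what the AKS add-on typically creates (a Helm install will differ). Keep in mind that AGIC points the gateway's backend pool directly at pod IPs, so with Azure CNI the peered Application Gateway subnet must be able to reach the pod network:

# Ask Application Gateway which backends pass its health probes
az network application-gateway show-backend-health \
  --resource-group <appgw-resource-group> --name <appgw-name>

# Read the AGIC controller logs for sync or routing errors
kubectl logs -n kube-system deployment/ingress-appgw-deployment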

How to deploy .NET core web and worker projects to Kubernetes in a single deployment?

I am relatively new to Docker and Kubernetes. My requirement is to deploy one web project and one worker (.NET background service) project in a single deployment.
This is how my deployment.yml file looks:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        #ports:
        #- containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: xxxxx.azurecr.io/web:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        ports:
        - containerPort: 80
This is how my service.yml file looks:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: web
What I have found is that if I keep both in the service.yml file then only one is deployed to Kubernetes, and if I comment one out and apply them one by one then both deploy.
Is there any rule that we can't have both in a single file? Any reason why it's not working together but works individually?
One more question: is there any way to look inside the worker service pod, something like taking a remote session to it and seeing what exactly is going on there? Even if it's a console application, is there any way to read what it's printing to the console after deployment?
This issue was resolved in the comments section, and I decided to provide a Community Wiki answer for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes (“---”).
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - image: nginx
        name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created
$ kubectl get deploy,svc
NAME READY UP-TO-DATE
deployment.apps/app-1 1/1 1
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
service/app-1 ClusterIP 10.8.14.179 <none> 80/TCP
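As for the second question, inspecting the worker pod: kubectl logs shows whatever the container writes to stdout/stderr (a console app's output), and kubectl exec opens a shell inside the running container. A quick sketch against the worker deployment from the question, assuming its image contains /bin/sh:

# Stream the container's console output
kubectl logs -f deployment/worker

# Open an interactive shell inside the running container
kubectl exec -it deploy/worker -- /bin/sh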

Application running on Azure AKS Kubernetes cannot be accessed from App Service

I have configured two different applications (SEQ and MockServer) on the Azure AKS service. They both work correctly from the internet, but they cannot be accessed from an Azure Web Service, nor from the Azure CLI.
Below is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mockserver-deployment
  labels:
    app: mockserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mockserver
  template:
    metadata:
      labels:
        app: mockserver
    spec:
      containers:
      - name: mockserver
        image: jamesdbloom/mockserver
        env:
        - name: LOG_LEVEL
          value: "INFO"
        ports:
        - containerPort: 1080
      imagePullSecrets:
      - name: my-secret
---
kind: Service
apiVersion: v1
metadata:
  name: mockserver-service
spec:
  selector:
    app: mockserver
  loadBalancerIP: 51.136.53.26
  type: LoadBalancer
  loadBalancerSourceRanges:
  # from Poland
  - 62.87.152.154/32
  - 83.30.150.205/32
  - 80.193.73.114/32
  - 195.191.163.0/24
  # from AppCenter test
  - 195.249.159.0/24
  - 195.0.0.0/8
  # from Marcin K home
  - 95.160.157.0/24
  - 93.105.0.0/16
  ports:
  - port: 1080
    targetPort: 1080
    name: mockserver
The best approach is to use VNET integration for your App Service (https://learn.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet) combined with an internal LoadBalancer-type service (https://learn.microsoft.com/en-us/azure/aks/internal-lb). This way the communication between the App Service and AKS flows only over the internal VNET. Note that you can also keep an external LB service like the one you already have; multiple services can serve traffic to the same set of pods.
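For reference, the internal variant differs from the question's service by a single annotation, which the internal-lb document above describes. A sketch; the service name is illustrative and the ports are carried over from the question:

kind: Service
apiVersion: v1
metadata:
  name: mockserver-service-internal
  annotations:
    # asks the Azure cloud provider for an internal (VNET-only) load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: mockserver
  type: LoadBalancer
  ports:
  - port: 1080
    targetPort: 1080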

How to get an external IP of a VM running Kubernetes services

I have hosted Docker images in an Azure VM and I'm trying to access the service from outside the VM. This is not working because no external IP is generated for the service.
After building the Docker image, I applied a yml file that creates the Deployment and Service. My yml file looks as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planservice-deployment
  labels:
    app: planservice-deploy
spec:
  selector:
    matchLabels:
      run: planservice-deploy
  replicas: 2
  template:
    metadata:
      labels:
        run: planservice-deploy
    spec:
      containers:
      - name: planservice-deploy
        image: planserviceimage
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8086
---
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
---
After that, I ran the following command to look at the running services:
kubectl get pods --output=wide
This command returned all the running services and their external IP information. But when I looked at the list, all the services had blank external IPs.
How do I set an external IP for the services, so that I can access my web services from outside the VM?
You need to change the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
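Alternatively, if the cluster has no cloud-provider integration (for example kubeadm on a bare VM, where a LoadBalancer service stays <pending> forever), the existing NodePort service is already reachable through the node itself. A sketch; the allocated port below is illustrative, and the VM's NSG must allow it:

# See which port in the 30000-32767 range Kubernetes allocated on the node
kubectl get service planservice-service
# NAME                  TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)
# planservice-service   NodePort   10.0.123.45   <none>        80:31234/TCP

# The service is then reachable at http://<vm-public-ip>:31234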

Azure Kubernetes Services 502 Bad Gateway

I have an AKS cluster up and running, and under heavy user load I get some 502 Bad Gateway responses; this only happens when the request load is high. I used Azure DevOps load testing to reproduce this behavior. I believe it has something to do with the load balancer timeouts, but I am not too sure how to go about debugging this. Perhaps I should be checking logs somewhere? Searching around Google tells me that I should be checking the nginx logs, but I am not sure where to find those. Sorry, I am a newbie in the Kubernetes world.
These are all the pods in the cluster (screenshot in the original question); the apserver-api-... pods are my actual apps that serve the requests:
The YAML file used to generate this:
# DS for AP
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: apserver-api
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
  template:
    metadata:
      labels:
        app: apserver-api
    spec:
      containers:
      - name: apserver-api
        image: IMAGE
        env:
        - name: APP_SVC
          value: apserver-api
        ports:
        - containerPort: 80
        imagePullPolicy: IfNotPresent
# Service for AP
kind: Service
apiVersion: v1
metadata:
  labels:
    app: apserver-api
  name: apserver-api
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
    targetPort: 80
  selector:
    app: apserver-api
  type: "LoadBalancer"
A screenshot of the load test was also attached to the original question.
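On the question of where the nginx logs live: if an nginx ingress controller is running in front of these pods, its access and error logs are simply the controller pod's logs. A sketch, assuming a typical ingress-nginx install (the namespace varies by setup):

# Find the ingress controller pod, whatever namespace it landed in
kubectl get pods --all-namespaces | grep -i ingress

# Stream its logs; 502s appear together with the failing upstream address
kubectl logs -n ingress-nginx <ingress-controller-pod-name> -f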
