How to change kubernetes api-server flags [AKS] [Kubernetes 1.8] - azure

I am trying to set up horizontal pod autoscaling using custom metrics. To support custom metrics in Kubernetes 1.8.1, I need to enable the aggregation layer by setting the following flags on kube-apiserver:
--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
The Kubernetes documentation does not contain any information on how to set these flags on the API server and controller manager. I am using Azure Kubernetes Service (AKS).
I thought one possible way to set these flags might be to edit the YAML of the kube-apiserver-xxx pod, but when I run:
kubectl get po -n kube-system
I see no pod for kube-apiserver, nor for the kube-controller-manager.
What is the correct way to set these flags in AKS?
I also deployed the Prometheus adapter for custom metrics, but the pod logs showed the following error:
panic: cluster doesn't provide requestheader-client-ca-file
You can see the exact requirements in the configuration section of this link.
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

AKS now supports the aggregated API - you can find specific scaling details in the following GitHub comment: https://github.com/Azure/AKS/issues/77#issuecomment-352926551. Run "az aks upgrade", even to the same Kubernetes version, and AKS will update the control plane with the required certificates on the backend.
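For example (the resource group and cluster names below are placeholders), re-running the upgrade against the version you are already on would look something like this:
az aks upgrade --resource-group <my-resource-group> --name <my-aks-cluster> --kubernetes-version 1.8.1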

Support for the aggregation layer was added a couple of weeks ago, so no configuration should be necessary for a new cluster. Please see the details here: https://github.com/Azure/AKS/issues/54
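Once the control plane has been updated, one way to check that the aggregation-layer certificates are in place (just a sketch; the exact keys in the configmap can vary by cluster) is to inspect the configmap that the Prometheus adapter reads:
kubectl get configmap extension-apiserver-authentication -n kube-system -o yaml | grep requestheader
If requestheader-client-ca-file shows up there, the "cluster doesn't provide requestheader-client-ca-file" panic should disappear.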

Related

azure devops with self Hosted agent : can't deploy to aks cluster

I want to create an Azure DevOps release pipeline that builds a Docker image and deploys it to an AKS cluster.
The build and the push to ACR work well, but the deployment to AKS doesn't.
This is the result after running the pipeline:
And these are the error logs:
2023-01-08T22:20:48.7666031Z ##[section]Starting: deploy
2023-01-08T22:20:48.7737773Z ==============================================================================
2023-01-08T22:20:48.7741356Z Task : Deploy to Kubernetes
2023-01-08T22:20:48.7745738Z Description : Use Kubernetes manifest files to deploy to clusters or even bake the manifest files to be used for deployments using Helm charts
2023-01-08T22:20:48.7750005Z Version : 0.212.0
2023-01-08T22:20:48.7752721Z Author : Microsoft Corporation
2023-01-08T22:20:48.7755489Z Help : https://aka.ms/azpipes-k8s-manifest-tsg
2023-01-08T22:20:48.7757618Z ==============================================================================
2023-01-08T22:20:49.2976400Z Downloading: https://storage.googleapis.com/kubernetes-release/release/stable.txt
2023-01-08T22:20:49.8627101Z Found tool in cache: kubectl 1.26.0 x64
2023-01-08T22:20:50.6940515Z ==============================================================================
2023-01-08T22:20:50.6942077Z Kubectl Client Version: v1.26.0
2023-01-08T22:20:50.6943172Z Kubectl Server Version: v1.23.12
2023-01-08T22:20:50.6944430Z ==============================================================================
2023-01-08T22:20:50.7161602Z [command]/azp/_work/_tool/kubectl/1.26.0/x64/kubectl apply -f /azp/_work/_temp/Deployment_acrdemo2ss-deployment_1673216450713,/azp/_work/_temp/Service_acrdemo2ss-loadbalancer-service_1673216450713 --namespace dev
2023-01-08T22:20:50.9679948Z Unable to connect to the server: dial tcp: lookup tfkcluster-dns-074e9373.hcp.canadacentral.azmk8s.io on 192.168.1.1:53: no such host
2023-01-08T22:20:50.9771688Z ##[error]Unable to connect to the server: dial tcp: lookup tfkcluster-dns-074e9373.hcp.canadacentral.azmk8s.io on 192.168.1.1:53: no such host
2023-01-08T22:20:50.9809463Z ##[section]Finishing: deploy
This is my service connection:
Unable to connect to the server: dial tcp: lookup xxxx on
192.168.1.1:53: no such host
It appears that you are using a private cluster (the Private Cluster option was enabled when the AKS cluster was created).
kubectl is the Kubernetes command-line client and connects to the cluster from outside, but a private cluster cannot be reached externally.
This option cannot be disabled after cluster creation; you would need to delete the cluster and create a new one with the Private Cluster option disabled.
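A quick way to confirm whether the cluster is private (resource group and cluster names below are placeholders):
az aks show --resource-group <my-resource-group> --name <my-aks-cluster> --query apiServerAccessProfile.enablePrivateCluster
If this returns true, the *.azmk8s.io name in the error can only be resolved from inside (or peered with) the cluster's VNet, which matches the "no such host" lookup failure on the agent.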
Alternatively, you can set up a self-hosted agent that runs in the same VNet as the cluster, so it has access to both AKS and Azure Pipelines.
See Options for connecting to the private cluster
The API server endpoint has no public IP address. To manage the API server, you'll need to use a VM that has access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster.
Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster.
Use a VM in a separate network and set up Virtual network peering. See the section below for more information on this option.
Use an Express Route or VPN connection.
Use the AKS command invoke feature.
Use a private endpoint connection.
Creating a VM in the same VNet as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
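If you only need to run occasional kubectl commands rather than a full pipeline, the command invoke option from the list above tunnels them through the AKS control plane without any network line of sight; a rough example with placeholder names:
az aks command invoke --resource-group <my-resource-group> --name <my-aks-cluster> --command "kubectl get pods -n dev"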

Kubectl not working when AKS API authorized ranges are in place

We're implementing security on our k8s cluster in Azure (managed Kubernetes - AKS).
The cluster is deployed via an ARM template with the following configuration:
1 node, an availability set, a Standard load balancer, an Nginx-based ingress controller, and a set of applications deployed.
Following the documentation, we've updated the cluster to protect the API server from the whole internet:
az aks update --resource-group xxxxxxxx-xxx-xx-xx-xx-x -n xx-xx-xxx-aksCluster
--api-server-authorized-ip-ranges XX.XX.X.0/24,XX.XX.X.0/24,XX.XXX.XX.0/24,XX.XXX.XXX.XXX/32
--subscription xxxxx-xxx-xxx-xxx-xxxxxx
Operation is completed successfully.
When trying to grab logs from a pod, the following error occurs:
kubectl get pods -n lims-dev
NAME READY STATUS RESTARTS AGE
XXXX-76df44bc6d-9wdxr 1/1 Running 0 14h
kubectl logs XXXXX-76df44bc6d-9wdxr -n lims-dev
Error from server: Get https://aks-agentpool-XXXXXX-1:10250/containerLogs/XXXX/XXXXX-
76df44bc6d-9wdxr/listener: dial tcp 10.22.0.35:10250: i/o timeout
When trying to deploy using Azure DevOps, the same error is raised:
2020-04-07T04:49:49.0409528Z ##[error]Error: error installing:
Post https://xxxxx-xxxx-xxxx-akscluster-dns-xxxxxxx.hcp.eastus2.azmk8s.io:443
/apis/extensions/v1beta1/namespaces/kube-system/deployments:
dial tcp XX.XX.XXX.142:443: i/o timeout
Of course, the subnet from which I'm running kubectl is included in the authorized ranges.
I'm trying to understand the source of the problem.
You also need to specify the --load-balancer-outbound-ips parameter when creating the AKS cluster. This IP is used by your pods to communicate with the external world, as well as with the AKS API server. See here.
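In practice this means the cluster's own egress IP must also be part of the authorized ranges, otherwise in-cluster traffic to the API server is blocked. A rough sketch (resource group, cluster, region and IP values are placeholders):
# find the outbound public IP(s) of the Standard load balancer in the node resource group
az network public-ip list --resource-group MC_<my-rg>_<my-cluster>_<region> --query "[].ipAddress" -o tsv
# re-apply the authorized ranges, including that egress IP
az aks update --resource-group <my-rg> --name <my-cluster> --api-server-authorized-ip-ranges <existing-ranges>,<egress-ip>/32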

How to deploy a second Load Balancer for istio 1.5.1 on Azure

I need to deploy a second Azure Load Balancer for the ingress gateway of an app (separate from the main Load Balancer deployed by Istio's default profile).
I have tried the suggestions on GitHub (https://github.com/istio/istio/issues/19263). However, the result was actually an additional frontend IP configuration on the main Load Balancer, not an additional Load Balancer. This ends up with an "ERR_SSL_PROTOCOL_ERROR" error (with curl: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) if the same port 443 is used in both Istio ingress gateways.
istio version: 1.5.1
Any suggestions on how to deploy an additional Load Balancer for the second ingress gateway? Thanks
This is a tricky configuration, as it requires an entirely new second Istio ingress gateway (not just a Gateway object). There is an article about this here.
This approach creates a new HorizontalPodAutoscaler, Deployment, Gateway, PodDisruptionBudget, Service and ServiceAccount for the second Istio ingress gateway, based on the default configuration.
After modifying all the names and labels, you can kubectl apply the manifest to your Istio cluster. As for the load balancer, a new one will be attached to the new ingress gateway's Service automatically; a rough sketch of a declarative alternative is shown below.
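If you prefer to define the second gateway declaratively instead of copying every manifest by hand, a minimal sketch using the IstioOperator API available in Istio 1.5 could look like this (the second gateway's name and label are made up for illustration):
cat <<'EOF' > second-ingressgateway.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway        # keep the default gateway
      enabled: true
    - name: istio-ingressgateway-apps   # hypothetical second gateway
      enabled: true
      label:
        istio: ingressgateway-apps      # reference this label from your Gateway objects
      k8s:
        service:
          type: LoadBalancer            # gets its own external IP on Azure
EOF
istioctl manifest apply -f second-ingressgateway.yaml
Note that applying an IstioOperator spec merges it onto the default profile, so review the generated manifest before applying it to a running cluster.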
Hope it helps.

Kubernetes: Kube-DNS vs. CoreDNS

I am new to Kubernetes and looking for a better understanding of the difference between Kube-DNS and CoreDNS.
As I understand it the recommendation is to use the newer CoreDNS rather than the older Kube-DNS.
I have setup a small cluster using kubeadm and now I am a little confused about the difference between CoreDNS and Kube-DNS.
Using kubectl get pods --all-namespaces I can see that I have two CoreDNS pods running.
However using kubectl get svc --all-namespaces I also see that I have a service named kube-dns running in the kube-system namespace. When I inspect that with kubectl describe svc/kube-dns -n kube-system I can see that the kube-dns service links to coredns.
I am now wondering if I am actually running both kube-dns and CoreDNS, and if not, why that service is called kube-dns and not core-dns.
I have K8s 1.12. Do a describe of the DNS pod:
kubectl describe pod coredns-576cbf47c7-hhjrs --namespace=kube-system | grep -i "image:"
Image: k8s.gcr.io/coredns:1.2.2
It looks like CoreDNS is running. According to the documentation, CoreDNS is the default from K8s 1.11; for previous installations it's kube-dns.
The image is what's important; the rest (names, labels, etc.) is metadata.
According to the K8s blog here:
In Kubernetes 1.11, CoreDNS has reached General Availability (GA) for DNS-based service discovery, as an alternative to the kube-dns addon. This means that CoreDNS will be offered as an option in upcoming versions of the various installation tools. In fact, the kubeadm team chose to make it the default option starting with Kubernetes 1.11.
Also, see this link for more info.
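One way to see the link for yourself (a sketch; label values can differ between installers) is to compare the kube-dns Service's selector with the labels on the CoreDNS pods:
kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.selector}'
kubectl get pods -n kube-system -l k8s-app=kube-dns -o name
The Service keeps the kube-dns name and the k8s-app=kube-dns selector for backwards compatibility, and the CoreDNS pods carry that label, which is why a service called kube-dns fronts CoreDNS pods.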

acs-engine with custom vnet dns: error server misbehaving

With acs-engine I have created a k8s cluster with a custom vnet. The cluster was deployed and the pods are running.
When I do kubectl get nodes or get pods I get a reply, but when I use exec to get into a pod or run helm install, I get this error:
Error from server: error dialing backend: dial tcp: lookup k8s-agentpool on 10.40.1.133:53: server misbehaving
I used the following json file to create the arm templates:
acs-engine.json
When not using a custom vnet, the default Azure DNS is used; with a custom vnet, our own DNS servers are used. Is the only option to register all masters and agents with the DNS server?
Resolved it by adding all cluster nodes to our DNS servers.
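Before changing DNS registrations, a quick way to confirm this is the cause (the node name is a placeholder; the DNS IP is the one from the error) is to query the custom DNS server directly:
nslookup <agent-node-hostname> 10.40.1.133
If the custom DNS server cannot answer for the node hostnames, the API server cannot reach the kubelets by name, which is exactly what exec and helm's port-forwarding rely on.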
