Azure DevOps with self-hosted agent: can't deploy to AKS cluster

I want to create an Azure DevOps release pipeline that builds a Docker image and deploys it to an AKS cluster.
The build and the push to ACR work well, but the deployment to AKS doesn't work.
These are the results after running the pipeline:
And these are the error logs:
2023-01-08T22:20:48.7666031Z ##[section]Starting: deploy
2023-01-08T22:20:48.7737773Z ==============================================================================
2023-01-08T22:20:48.7741356Z Task : Deploy to Kubernetes
2023-01-08T22:20:48.7745738Z Description : Use Kubernetes manifest files to deploy to clusters or even bake the manifest files to be used for deployments using Helm charts
2023-01-08T22:20:48.7750005Z Version : 0.212.0
2023-01-08T22:20:48.7752721Z Author : Microsoft Corporation
2023-01-08T22:20:48.7755489Z Help : https://aka.ms/azpipes-k8s-manifest-tsg
2023-01-08T22:20:48.7757618Z ==============================================================================
2023-01-08T22:20:49.2976400Z Downloading: https://storage.googleapis.com/kubernetes-release/release/stable.txt
2023-01-08T22:20:49.8627101Z Found tool in cache: kubectl 1.26.0 x64
2023-01-08T22:20:50.6940515Z ==============================================================================
2023-01-08T22:20:50.6942077Z Kubectl Client Version: v1.26.0
2023-01-08T22:20:50.6943172Z Kubectl Server Version: v1.23.12
2023-01-08T22:20:50.6944430Z ==============================================================================
2023-01-08T22:20:50.7161602Z [command]/azp/_work/_tool/kubectl/1.26.0/x64/kubectl apply -f /azp/_work/_temp/Deployment_acrdemo2ss-deployment_1673216450713,/azp/_work/_temp/Service_acrdemo2ss-loadbalancer-service_1673216450713 --namespace dev
2023-01-08T22:20:50.9679948Z Unable to connect to the server: dial tcp: lookup tfkcluster-dns-074e9373.hcp.canadacentral.azmk8s.io on 192.168.1.1:53: no such host
2023-01-08T22:20:50.9771688Z ##[error]Unable to connect to the server: dial tcp: lookup tfkcluster-dns-074e9373.hcp.canadacentral.azmk8s.io on 192.168.1.1:53: no such host
2023-01-08T22:20:50.9809463Z ##[section]Finishing: deploy
This is my service connection:

Unable to connect to the server: dial tcp: lookup xxxx on 192.168.1.1:53: no such host
It appears that you are using a private cluster (the Private cluster option was enabled when the AKS cluster was created).
kubectl is the Kubernetes command-line client: it connects to the cluster's API server over the network, and a private cluster's API server cannot be reached from outside the cluster's virtual network, which is why the DNS lookup fails on your agent.
This option cannot be disabled after the cluster has been created; you would need to delete the cluster and create a new one with the "Private cluster" option disabled.
Alternatively, you can set up another self-hosted agent that sits in the same VNet as the cluster and therefore has access both to AKS and to Azure Pipelines.
See Options for connecting to the private cluster:
The API server endpoint has no public IP address. To manage the API server, you'll need to use a VM that has access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster.
Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster.
Use a VM in a separate network and set up Virtual network peering. See the section below for more information on this option.
Use an Express Route or VPN connection.
Use the AKS command invoke feature.
Use a private endpoint connection.
Creating a VM in the same VNet as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
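If changing the agent's network is not possible right away, the AKS command invoke feature listed above lets you run kubectl through the Azure control plane without any network line of sight to the private API server. A minimal sketch, assuming the Azure CLI is available on the agent; the resource group, cluster name and manifest file names are placeholders:

# Run kubectl inside the private cluster via the Azure-managed proxy,
# attaching the manifests from the current folder; no VNet access needed.
az aks command invoke \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --command "kubectl apply -f deployment.yaml -f service.yaml --namespace dev" \
  --file .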

Related

Establish Connection between AzureEventHub & AzureKubernetes

I have an Azure Event Hub set up and it is open to a selected network only, which means only the IPs that are whitelisted can access it.
On the other end, I have Azure Kubernetes Service configured to read messages from the Event Hub. I get a connection error saying the broker is not available, because the IPs of the Kubernetes cluster are not whitelisted in the Event Hub.
The question is: where do I get the IPs of AKS that can be configured in my Azure Event Hub?
What I have already tried: in the Overview of the AKS cluster, we have the IP ranges below.
Pod CIDR
Service CIDR
DNS service IP
Docker bridge CIDR
Adding the above isn't working.
You need to register Azure Kubernetes Service (AKS) with Azure Event Grid. This can be done either through the CLI or PowerShell.
There is a set of commands you need to run to complete the task (a CLI sketch follows this list):
Enable the Event Grid resource provider
Create an AKS cluster using the az aks create command
Create a namespace and event hub using az eventhubs namespace create and az eventhubs eventhub create
Refer to the official documentation and follow the given examples.
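A rough CLI sketch of those steps; the resource group, cluster, Event Hubs namespace and hub names are placeholders, not values from the question:

# Register the Event Grid resource provider (one-time per subscription)
az provider register --namespace Microsoft.EventGrid

# Create the AKS cluster (skip if it already exists)
az aks create --resource-group myRG --name myAKS --node-count 1 --generate-ssh-keys

# Create an Event Hubs namespace and an event hub inside it
az eventhubs namespace create --resource-group myRG --name myEhNamespace --location canadacentral
az eventhubs eventhub create --resource-group myRG --namespace-name myEhNamespace --name myHub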

Kubernetes pod failed to connect to external service

I have an Azure Kubernetes cluster running with Azure CNI (virtual network) as the network plugin. The cluster is running in one subnet of the network.
On another subnet, I have a virtual machine running with a private IP of 10.1.0.4.
Now I have a pod in the K8s cluster which is trying to connect to the virtual machine, but it's not able to do so.
Also, ping 10.1.0.4 from inside the pod gives a timeout.
Please help me figure out what I am doing wrong so that I can connect the pod to the VM.
• You cannot directly create communication between an AKS cluster pod and a virtual machine, because the IP assigned to a pod/node in an AKS cluster is taken from a sub-range of the larger CIDR address range assigned when the cluster was deployed. Communication between nodes within the cluster is uninterrupted and readily possible, but communication with resources outside AKS is restricted, since it is governed by the Azure CNI framework policy, which directs the Kubernetes cluster to send traffic out of the cluster in a regulated and conditional way.
• Thus, the above can only be achieved by introducing an intermediate service, such as an internal load balancer, between the AKS cluster and the VM, since the CIDR ranges of the VM and the AKS cluster are different. Deploying an internal load balancer as a service through the Azure plugin in AKS is the way to achieve communication between an AKS pod and a VM deployed in Azure. Below is a diagram for illustration purposes.
To deploy the internal load balancer through YAML files in AKS for communication with VMs, refer to the link below for details, and see the sketch after it:
https://fabriciosanchez-en.azurewebsites.net/implementing-virtual-machine-to-pod-communication-in-azure-kubernetes-service-aks/
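For reference, an internal load balancer in AKS is an ordinary Service of type LoadBalancer carrying the azure-load-balancer-internal annotation. A minimal sketch, with a placeholder app label and port:

# Expose the pods on a private IP from the cluster's VNet via an internal
# Azure load balancer; the VM can then reach the pods through that IP.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
  - port: 80
    targetPort: 80
EOF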

Azure pods app connect to MSSQL server installed in Azure VM

I have an Azure VM (Windows) with MSSQL installed.
I created a cluster in AKS, then created a pod with an image of an application running on embedded Tomcat that uses MSSQL, connecting to the private IP of the above VM. The container in the pod starts with an error: it cannot connect to that private IP of MSSQL.
I can access that private IP from my local machine (using a VPN), so is there any way/config to make the pod connect to that VM using the private IP? Since it's the same infrastructure, I don't understand why it cannot connect.
(I am a newbie with Azure.)
Thanks a lot.
For your requirement, I don't know how you deployed the VM and the AKS cluster, so I give the solutions for two situations:
AKS cluster with the network type kubenet:
VM in VNet A and AKS in VNet B: create a service with an internal load balancer for the pod, and then peer VNets A and B.
VM and AKS in the same VNet: create a service with an internal load balancer for the pod.
AKS cluster with the network type CNI:
VM in VNet A and AKS in VNet B: peer VNets A and B.
VM and AKS in the same VNet: you don't need to do anything, it should work.
All of the above solutions require you to check the NSG rules between the VM and the AKS cluster pods: you need to allow inbound traffic to the VM on the MSSQL port (a CLI sketch follows).
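As a sketch of that NSG rule, assuming a placeholder resource group (myRG), NSG name (vm-nsg) and AKS subnet range (10.240.0.0/16):

# Allow inbound SQL Server traffic (TCP 1433) from the AKS subnet to the VM
az network nsg rule create \
  --resource-group myRG \
  --nsg-name vm-nsg \
  --name allow-mssql-from-aks \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.240.0.0/16 \
  --destination-port-ranges 1433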

Kubernetes: Connect to Azure SQL

I have hosted my SQL on Azure SQL.
From my AKS cluster, I found out that each of the pods is not able to connect to Azure SQL.
DB Connection:
Data Source=tcp:dbname.database.windows.net,1433;Initial Catalog=dbname;User Id={account};Password={password}
In the Azure Portal, I have enabled the setting below.
I double-checked the connection string and I am able to connect from my local machine, but inside the Kubernetes pod, when I telnet to the server it responds:
Connection closed by foreign host.
May I know what is going wrong here?
Azure provides two options for pods running on AKS worker nodes to access a MySQL or PostgreSQL DB instance:
Create a firewall rule on the Azure DB Server with a range of IP addresses that encompasses all IPs of the AKS Cluster nodes (this can be a very large range if using node auto-scaling).
Create a VNet Rule on the Azure DB Server that allows access from the subnet the AKS nodes are in. This is used in conjunction with the Microsoft.Sql VNet Service Endpoint enabled on the cluster subnet.
VNet Rules are recommended and preferable in this situation for several reasons. Nodes are often configured with dynamic IP addresses that can change when a node is restarted resulting in broken firewall rules that reference specific IPs. Nodes can be added to a cluster which would require updating the firewall rule to add additional IPs. VNet Rules avoid these issues by granting access to an entire subnet of AKS nodes.
Manual steps
Configuring a secure networking environment for AKS and Azure DB requires the following:
AKS cluster setup
ResourceGroup: a logical grouping of all the required resources.
VNet: creates a virtual network for the AKS cluster nodes.
Subnet: a range of private IPs for the AKS cluster nodes.
Create an AKS cluster using the above resources.
Configure managed service access
VNet Service Endpoint: update the cluster subnet above with a service endpoint for Microsoft.Sql to enable connectivity for new Azure DB service resource.
Provision managed services with private IPs on the cluster’s network
Provision managed Azure DB service instances: PostgreSQL, MySQL.
VNet Rule for each managed service instance to allow traffic from all nodes in the cluster subnet to a given Azure DB service instance (PostgreSQL, MySQL).
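A rough CLI sketch of the service endpoint and VNet rule steps, shown against Azure SQL since that is what the question uses; the resource group, VNet, subnet and server names are placeholders:

# Enable the Microsoft.Sql service endpoint on the AKS cluster subnet
az network vnet subnet update \
  --resource-group myRG \
  --vnet-name aks-vnet \
  --name aks-subnet \
  --service-endpoints Microsoft.Sql

# Allow traffic from that subnet to the Azure SQL logical server
az sql server vnet-rule create \
  --resource-group myRG \
  --server my-sql-server \
  --name allow-aks-subnet \
  --vnet-name aks-vnet \
  --subnet aks-subnet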
I have found the issue. Basically, the issue is that AKS was getting the wrong configuration: for the Identity setup (AddEntityFrameworkStores()), it doesn't read the proper appsettings.json, which should point to /secrets/*.json.
I changed the code to retrieve the information from the correct secret, and the app works now.
Sadhu's answer is correct and secure. But first you can quickly check by enabling the traffic as follows.
First, select your server from your resource group.
Now, in your SQL server, enable "Allow Azure services and resources to access this server".
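If you prefer the CLI, that toggle corresponds to the special 0.0.0.0 firewall rule on the SQL logical server; a sketch with placeholder resource group and server names:

# "Allow Azure services and resources to access this server" is equivalent
# to the 0.0.0.0 - 0.0.0.0 firewall rule on the SQL logical server.
az sql server firewall-rule create \
  --resource-group myRG \
  --server my-sql-server \
  --name AllowAllWindowsAzureIps \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0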

acs-engine with custom vnet dns: error server misbehaving

With acs-engine I have created a K8s cluster with a custom VNet. The cluster was deployed and the pods are running.
When I do a kubectl get nodes or kubectl get pods I get a reply, but when I use exec to get into a pod or use helm install I get the error:
Error from server: error dialing backend: dial tcp: lookup k8s-agentpool on 10.40.1.133:53: server misbehaving
I used the following JSON file to create the ARM templates:
acs-engine.json
When not using a custom VNet, the default Azure DNS is used; with a custom VNet, our own DNS servers are used. Is the only option to register all masters and agents in the DNS server?
Resolved it by adding all cluster nodes to our DNS servers.
