Establish Connection between AzureEventHub & AzureKubernetes

I have an Azure Event Hub set up and it is open to selected networks only, which means only whitelisted IPs can access it.
On the other end, I have Azure Kubernetes Service (AKS) configured to read messages from the Event Hub. I get a connection error saying the broker is not available, because the IPs of the Kubernetes cluster are not whitelisted in the Event Hub.
Question is: where would I get the IPs of AKS that can be configured in my Azure Event Hub?
What I have already tried: in the Overview of the AKS cluster, we have certain IPs/CIDRs as below.
Pod CIDR
Service CIDR
DNS service IP
Docker bridge CIDR
Adding the above isn't working!

You need to register the Azure Kubernetes Service (AKS) in Azure Event Grid. This can be done either through the CLI or PowerShell.
There is a set of commands you need to perform to complete the task:
Enable the Event Grid resource provider
Create an AKS cluster using the az aks create command
Create an Event Hubs namespace and event hub using az eventhubs namespace create and az eventhubs eventhub create
Refer to the official documentation for the full examples; a rough sketch of the commands follows.
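(All names below, such as myResourceGroup, myAKSCluster and myEventHubNamespace, are placeholders, not values from the question.)

az provider register --namespace Microsoft.EventGrid

az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys

az eventhubs namespace create --resource-group myResourceGroup --name myEventHubNamespace --location westus2

az eventhubs eventhub create --resource-group myResourceGroup --namespace-name myEventHubNamespace --name myEventHub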

Related

Azure AKS Load Balancer issue with Azure Network CNI plugin not accessible

I am deploying an API application on an existing AKS cluster which uses the Azure CNI plugin. The deployment manifests are native k8s with kustomize. The resources being deployed are the API deployment with an NGINX ingress controller and a couple of ingress routes for the API itself and for Grafana and Prometheus (through the Prometheus operator). I have only one ingress route added so far, which is just for accessing the API.
When I deploy the resources, all of them get successfully deployed and a public IP gets assigned to the controller. However, when I try to hit the public IP to fetch a response from the endpoint, it is not accessible. I am looking for some help to troubleshoot the issue.
After looking at the setup a little, I realized a couple of things:
Load Balancer's resource group and the nodes in the agent pools have different resource groups.
The NSG inbound and outbound rules are not in the same resource group.
I am not sure what piece is missing. I tried changing the resource group of the load balancer, but that failed with a validation error. I also ran the same setup with the default kubenet network plugin and it worked successfully. Any help on this will be greatly appreciated.
Q1: Load Balancer's resource group and the nodes in the agent pools have different resource groups.
Azure AKS is an individual resource, but its components are not. You need to create the AKS cluster in a resource group, and when you do, Azure creates another resource group to hold the components of the AKS cluster. So there will be two resource groups once the AKS cluster is created. Here are the details to help you better understand it.
And I guess you want to assign a static public IP address to the ingress controller and create the static public IP in the resource group that the AKS resource is in, not the one with the components. If I'm right, then you need to assign the service principal of the AKS cluster a network role. Here are the steps for how to create a static public IP in another resource group.
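A minimal sketch of those steps, assuming placeholder names (the role assignment targets the AKS cluster's service principal or managed identity, and the public IP is created in the cluster's own resource group rather than the node resource group):

az role assignment create --assignee <aks-client-id> --role "Network Contributor" --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup

az network public-ip create --resource-group myResourceGroup --name myStaticIP --sku Standard --allocation-method Static

The ingress controller service can then reference that IP, for example via the service.beta.kubernetes.io/azure-load-balancer-resource-group annotation together with loadBalancerIP, so AKS knows which resource group to look the IP up in.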
Q2: The NSG inbound and outbound rules are not in the same resource group.
You do not need to care about the NSG inbound and outbound rules for AKS; Azure manages them itself. You just need to focus on how to deploy applications in the AKS cluster.

Kubernetes: Connect to Azure SQL

I have hosted my SQL on Azure SQL.
From my AKS, I found out that each of the pods is not able to connect to Azure SQL.
DB Connection:
Data Source=tcp:dbname.database.windows.net,1433;Initial Catalog=dbname;User Id={account};Password={password}
In the Azure Portal, I have enabled the setting below.
I double-checked the connection string and am able to connect from my local machine, but inside the kubernetes pod, when I try to telnet to the server it responds:
Connection closed by foreign host.
May I know what is going wrong here?
Azure provides two options for pods running on AKS worker nodes to access a MySQL or PostgreSQL DB instance:
Create a firewall rule on the Azure DB Server with a range of IP addresses that encompasses all IPs of the AKS Cluster nodes (this can be a very large range if using node auto-scaling).
Create a VNet Rule on the Azure DB Server that allows access from the subnet the AKS nodes are in. This is used in conjunction with the Microsoft.Sql VNet Service Endpoint enabled on the cluster subnet.
VNet Rules are recommended and preferable in this situation for several reasons. Nodes are often configured with dynamic IP addresses that can change when a node is restarted resulting in broken firewall rules that reference specific IPs. Nodes can be added to a cluster which would require updating the firewall rule to add additional IPs. VNet Rules avoid these issues by granting access to an entire subnet of AKS nodes.
Manual steps
Configuring a secure networking environment for AKS and Azure DB requires the following:
AKS cluster setup
ResourceGroup: a logical grouping of all the resources required.
VNet: creates a virtual network for the AKS cluster nodes.
Subnet: a range of private IPs for the AKS cluster nodes.
Create an AKS cluster using the above resources.
Configure managed service access
VNet Service Endpoint: update the cluster subnet above with a service endpoint for Microsoft.Sql to enable connectivity for new Azure DB service resource.
Provision managed services with private IPs on the cluster’s network
Provision managed Azure DB service instances: PostgreSQL, MySQL.
VNet Rule for each managed service instance to allow traffic from all nodes in the cluster subnet to a given Azure DB service instance (PostgreSQL, MySQL).
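A minimal CLI sketch of the VNet Rule approach above, with placeholder names (the same pattern works with az postgres server vnet-rule create or az mysql server vnet-rule create for the managed PostgreSQL/MySQL services):

az network vnet subnet update --resource-group myResourceGroup --vnet-name myAKSVnet --name myAKSSubnet --service-endpoints Microsoft.Sql

SUBNET_ID=$(az network vnet subnet show --resource-group myResourceGroup --vnet-name myAKSVnet --name myAKSSubnet --query id -o tsv)

az sql server vnet-rule create --resource-group myDbResourceGroup --server mydbserver --name allow-aks-subnet --subnet $SUBNET_ID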
I have found the issue. Basically the issue is that AKS was getting the wrong configuration: for Identity, it wasn't reading the proper appsettings.json, which should point to /secrets/*.json.
AddEntityFrameworkStores()
I changed the code to retrieve the information from the correct secret, and the app works now.
Sadhus answer is correct and secure. But first you can quickly check by enabling the traffic as follows.
First select your server from your resource group.
Now in your SQL server, enable "Allow Azure services and resources to access this server".
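The CLI equivalent of that toggle, with placeholder names, is the special 0.0.0.0 firewall rule, which means "allow Azure services" rather than "allow everything":

az sql server firewall-rule create --resource-group myDbResourceGroup --server mydbserver --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0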

Steps for deployment of Container Instance with Virtual Network

I'd like to automate the deployment of a virtual network (that is peered with another network) and container instance connected to that network.
I just want to confirm that these are the correct steps. I'll be using the Azure REST API.
Deploy a Virtual Network with a subnet
Create a Peering to the other virtual network
Create a Network Profile
Deploy the Container with the created network profile.
Step 3 is a bit weird for me because it's different from what I do in the Azure Portal. In the Portal, I just select the virtual network that I want my container to be connected to. Looking at the MSDN docs, it seems to me that the REST API requires me to create that network profile first. Am I right?
When you deploy a container using az container create, the az CLI will create the network profile for you in the background.
This might be why you haven't seen explicit creation of the network profile before.
A network profile is a network configuration template for Azure resources. It specifies certain network properties for the resource, for example, the subnet into which it should be deployed. When you first use the az container create command to deploy a container group to a subnet (and thus a virtual network), Azure creates a network profile for you. You can then use that network profile for future deployments to the subnet.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet#network-profile
Your steps look good.
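For example, a CLI deployment straight into a subnet could look like this sketch (names are placeholders); you can then inspect the profile the CLI generated behind the scenes:

az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --vnet myVnet --subnet mySubnet

az network profile list --resource-group myResourceGroup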

Network setup for accessing Azure Redis service from Azure AKS

We have an application that runs on an Ubuntu VM. This application connects to Azure Redis, Azure Postgres and Azure CosmosDB(mongoDB) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and aks should be configured so that pods inside the cluster can access the above services or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image.
In the firewall/inbound rules of the services, I added the AKS API IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and
intend to access all the above services from the cluster.
I assume that you would like to make all the services accessible to each other and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to
applications running in the same virtual network as the Kubernetes
cluster.
You can give it a try and follow this document: Use an internal load balancer with Azure Kubernetes Service (AKS). In the end, good luck to you!
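A minimal internal load balancer service, roughly as shown in that document (the app name internal-app is a placeholder), can be applied like this; the annotation is what makes AKS put the load balancer on the cluster's virtual network instead of giving it a public IP:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
EOF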
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connection from all pods in your cluster will come through the first LoadBalancer-type service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it as stated in this article, via the loadBalancerIP spec.
On a side note, rather than a ConfigMap, due to the sensitivity of the connection strings, I'd suggest you create a Secret and pass that down to your Deployment to be mounted or exported as environment variables.
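As a sketch (names and keys below are placeholders), such a secret could be created like this and then surfaced to the pods via envFrom or individual valueFrom references in the Deployment:

kubectl create secret generic svc-connections --from-literal=REDIS_HOST=myredis.redis.cache.windows.net --from-literal=REDIS_PASSWORD='<access-key>' --from-literal=POSTGRES_CONN='<connection-string>'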

Azure container services and Application gateway

I connected an Application Gateway as a frontend for the services in the kubernetes cluster. I created a subnet on the k8s-vnet-<id> with address space 10.0.0.0/29 and connected the Application Gateway to that subnet.
I followed instrucions from https://fizzylogic.nl/2017/06/16/how-to-connect-azure-api-management-to-your-kubernetes-cluster/
When I try to scale the ContainerServices I get the following error:
Operation name: Write VirtualNetworks
Error code: InUseSubnetCannotBeDeleted
Message:
Subnet api-gateway-subnet is in use by /subscriptions/cdf495e8-6232-4a61-a661-716fec93f8b5/resourceGroups/KuberGoPlay/providers/Microsoft.Network/applicationGateways/ngaz-appgw-play/gatewayIPConfigurations/appGatewayIpConfig and cannot be deleted.
Why is the container service trying to delete the subnet when it scales?
Or am I connecting the Application Gateway the wrong way ?
/Martin
Why is the container service trying to delete the subnet when it
scales?
When we try to scale an Azure Container Service up or down (that is, update the resource), the request is processed by deleting and re-creating the resource.
You may encounter this error when attempting to update a resource, but
the request is processed by deleting and creating the resource. Make
sure to specify all unchanged values.
For more information about InUseSubnetCannotBeDeleted, please refer to that link.
Here is an article that talks about how to use a template to update a resource; please refer to it.
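As a side note, before scaling you can check what is still attached to the subnet. For the error above, something like the following (resource names taken from the error message, vnet name as in your setup) would show the Application Gateway's IP configuration holding on to it:

az network vnet subnet show --resource-group KuberGoPlay --vnet-name <k8s-vnet-name> --name api-gateway-subnet --query ipConfigurations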