We are currently migrating from AWS ECS to Azure Kubernetes Service. Our first step is to migrate the application code and leave the database in AWS RDS for now. Our RDS instance is protected by a security group which only allows connections from a set of IP addresses.
When connecting to the RDS instance, what IP address does the database see? How can I configure RDS to allow connection from a kubernetes pod?
If you have an Azure Load Balancer attached to the worker nodes (i.e. any Kubernetes service of type LoadBalancer), outbound traffic from the pods will use the first IP attached to that Load Balancer. If not, it will use the public IP attached to the VM the pod runs on. If the VM doesn't have a public IP (the default for AKS), it will use an ephemeral IP that might change at any time and that you have no control over.
So just create a service of type LoadBalancer in AKS, find its external IP address, and whitelist that in the RDS security group.
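For example, a minimal sketch of such a service (the name, labels and ports are placeholders, not taken from your setup):

apiVersion: v1
kind: Service
metadata:
  name: app-egress-lb          # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: my-app                # assumed pod label
  ports:
    - port: 80
      targetPort: 8080

Running kubectl get service app-egress-lb then shows an EXTERNAL-IP column; that is the address the answer suggests whitelisting in the RDS security group.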
I have an Azure Kubernetes cluster running with Azure CNI (virtual network) as the network. The cluster is running on one subnet of the network.
On another subnet, I have a Virtual Machine running with a private IP of 10.1.0.4.
Now I have a pod in the K8S cluster, which is trying to connect with the Virtual Machine. But it's not able to do so.
Also, ping 10.1.0.4 from inside the pod times out.
Please help me figure out what I am doing wrong so that I can connect the pod to the VM.
• You cannot directly create communication between an AKS cluster pod and a Virtual Machine, because the IP assigned to a pod/node in an AKS cluster is drawn from a sub-range of the larger CIDR address space assigned when the cluster is deployed. Communication between the nodes within the cluster is readily possible, but communication with Azure resources outside the cluster is restricted, as it is governed by the Azure CNI framework policy, which directs the cluster to send traffic outbound in a regulated and conditional way.
• Thus, the above can only be achieved by introducing an intermediate service such as an internal load balancer between the AKS cluster and the VMs, since the CIDR ranges of the VM and the AKS cluster differ. Leveraging the Azure plugin to deploy an internal load balancer as a service through AKS is the only way to achieve communication between an AKS pod and a VM deployed in Azure.
To deploy the internal load balancer through YAML files in AKS for communication with VMs outside the cluster, refer to the link below for details:
https://fabriciosanchez-en.azurewebsites.net/implementing-virtual-machine-to-pod-communication-in-azure-kubernetes-service-aks/
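For orientation, a minimal sketch of such an internal load balancer service (names, labels and ports are assumptions, not taken from the linked article):

apiVersion: v1
kind: Service
metadata:
  name: internal-app                 # hypothetical service name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"   # request a private (internal) Azure load balancer
spec:
  type: LoadBalancer
  selector:
    app: internal-app                # assumed pod label
  ports:
    - port: 80
      targetPort: 8080

The service then receives a private IP from the cluster's virtual network, which a VM such as 10.1.0.4 can reach.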
Currently we have the following scenario:
We have established our connection (network wise) from on-premise to the Azure Kubernetes Cluster (private cluster!) without any problems.
Ports which are routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and are testing different configurations.
For setting up our AKS cluster, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0 /21
Service CIDR: 10.2.8.0 /23
So our pods get IPs from our virtual network subnet and the services get their IPs from the Service CIDR. So far so good. A route table associated with the virtual network subnet forwards all traffic to our firewall and vice versa: interacting with the virtual network works without any issue. The network team (which is also new to Azure cloud) has said that the connection and access to the Service CIDR should be working.
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the Kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are created successfully. The pod can be pinged and "accessed", but the service, which exposes the dashboard on port 443, cannot be accessed, e.g. https://10.2.8.42/.
My workaround so far is to give the Kubernetes dashboard service (type: ClusterIP) an external IP from the virtual network. This works, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
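For reference, a minimal sketch of that workaround, assuming the upstream dashboard labels and an otherwise free IP from the virtual network subnet:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: ClusterIP
  externalIPs:
    - 10.2.5.10                      # assumed free IP from the virtual network subnet
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      targetPort: 8443               # container port used by the dashboard in the upstream manifest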
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.
I have hosted my database on Azure SQL.
From my AKS cluster, I found that none of the pods are able to connect to Azure SQL.
DB Connection:
Data Source=tcp:dbname.database.windows.net,1433;Initial Catalog=dbname;User Id={account};Password={password}
In the Azure Portal, I have already enabled the relevant access settings.
I double-checked the connection string and am able to connect from my local machine, but when I telnet to the server from inside the Kubernetes pod, it responds with:
Connection closed by foreign host.
May I know what is going wrong here?
Azure provides two options for pods running on AKS worker nodes to access a MySQL or PostgreSQL DB instance:
Create a firewall rule on the Azure DB Server with a range of IP addresses that encompasses all IPs of the AKS Cluster nodes (this can be a very large range if using node auto-scaling).
Create a VNet Rule on the Azure DB Server that allows access from the subnet the AKS nodes are in. This is used in conjunction with the Microsoft.Sql VNet Service Endpoint enabled on the cluster subnet.
VNet Rules are recommended and preferable in this situation for several reasons. Nodes are often configured with dynamic IP addresses that can change when a node is restarted, resulting in broken firewall rules that reference specific IPs. Nodes can also be added to a cluster, which would require updating the firewall rule with additional IPs. VNet Rules avoid these issues by granting access to an entire subnet of AKS nodes.
Manual steps
Configuring a secure networking environment for AKS and Azure DB requires the following:
AKS cluster setup
ResourceGroup: a logical grouping of all the resources required.
VNet: creates a virtual network for the AKS cluster nodes.
Subnet: a range of private IPs for the AKS cluster nodes.
Create an AKS cluster using the above resources.
Configure managed service access
VNet Service Endpoint: update the cluster subnet above with a service endpoint for Microsoft.Sql to enable connectivity for new Azure DB service resource.
Provision managed services with private IPs on the cluster’s network
Provision managed Azure DB service instances: PostgreSQL, MySQL.
VNet Rule: for each managed service instance, allow traffic from all nodes in the cluster subnet to the given Azure DB service instance (PostgreSQL, MySQL).
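A rough sketch of those last two steps with the Azure CLI, assuming the single-server flavor of Azure Database for PostgreSQL and placeholder resource names:

# Enable the Microsoft.Sql service endpoint on the AKS cluster subnet
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name myAksSubnet \
  --service-endpoints Microsoft.Sql

# Create a VNet rule on the PostgreSQL server allowing traffic from that subnet
az postgres server vnet-rule create \
  --resource-group myResourceGroup \
  --server-name mypgserver \
  --name allow-aks-subnet \
  --vnet-name myVnet \
  --subnet myAksSubnet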
I have found the issue: basically, the deployment in AKS was getting the wrong configuration. For the Identity setup, it wasn't reading the proper appsettings.json, which should point to /secrets/*.json
AddEntityFrameworkStores()
I changed the code to retrieve the information from the correct secret, and the app works now.
Sadhus' answer is correct and secure. But first, you can quickly check by allowing the traffic as follows.
First select your server from your resource group.
Now, in your SQL server's firewall settings, enable "Allow Azure services and resources to access this server".
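If you prefer the CLI, that toggle corresponds to the special 0.0.0.0 firewall rule; a sketch with placeholder names:

az sql server firewall-rule create \
  --resource-group myResourceGroup \
  --server myserver \
  --name AllowAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0     # the 0.0.0.0-0.0.0.0 range means "Azure services", not the whole internet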
We have an application that runs on an Ubuntu VM. This application connects to Azure Redis, Azure Postgres and Azure CosmosDB(mongoDB) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and AKS should be configured so that pods inside the cluster can access the above services, or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image
In the firewall/inbound rules of the services, I added the AKS API server IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster.
I assume that you would like all the services to access each other, and that all of them are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster.
You can give it a try and follow this document: Use an internal load balancer with Azure Kubernetes Service (AKS). In the end, good luck to you!
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a LoadBalancer-type service in your AKS cluster, outbound connections from all pods in your cluster will come through that first LoadBalancer service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a Public IP and use it, as stated in this article, via the LoadBalancerIP spec.
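A minimal sketch of pinning such a service to a pre-created static Public IP (the IP, names and annotation value are placeholders; the resource-group annotation is only needed if the IP lives outside the cluster's node resource group):

apiVersion: v1
kind: Service
metadata:
  name: egress-ip-service                # hypothetical name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup   # resource group of the pre-created IP (assumption)
spec:
  type: LoadBalancer
  loadBalancerIP: 52.224.0.10            # pre-created static Public IP (placeholder)
  selector:
    app: my-app                          # assumed pod label
  ports:
    - port: 80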
On a side note, given the sensitivity of the connection string, I'd suggest you create a Secret rather than a ConfigMap and pass that down to your Deployment, either mounted or exported as an environment variable.
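For example, a sketch with made-up names and keys:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials                   # hypothetical name
type: Opaque
stringData:
  connection-string: "Host=myserver.postgres.database.azure.com;Port=5432;..."   # placeholder value

and in the Deployment's container spec:

  env:
    - name: DB_CONNECTION_STRING
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: connection-string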
I want to set up a Kubernetes cluster with a load balancer. Kubernetes will create a load balancer in Azure and attach a public IP address to it.
But I don't want to make the API public; it should be exclusively for my API Management service.
I tried to direct the load balancer into a VNet with the API service, but I found nothing.
So I thought I could just limit access to the public IP (a whitelist containing only the IP of my service), but I found nothing on the internet.
Is it possible to set such a rule on a public IP, or do I need some extra service for this?
With Kubernetes, assuming you have a service defined
Use the following commands:
kubectl get service
kubectl edit svc/<YOUR SERVICE>
Change the type from LoadBalancer to ClusterIP.
Now you can consume the service internally.
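For reference, a sketch of the relevant part of the edited service (name and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-api                 # hypothetical service name
spec:
  type: ClusterIP              # changed from LoadBalancer; no public IP is allocated
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080

Note that a ClusterIP service is only reachable from inside the cluster network, so the API Management service would need connectivity into that virtual network to reach it.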