We have an application that runs on an Ubuntu VM. This application connects to the Azure Redis, Azure Postgres, and Azure Cosmos DB (MongoDB API) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall settings of both the services and AKS should be configured so that pods inside the cluster can access the above services, or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection parameters (public IP/address, username/password, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection parameters of all the services as environment variables inside the container image.
In the firewall/inbound rules of the services, I added the AKS API server IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
"I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster."
I assume you would like all the services to access each other, and that all the services are in the AKS cluster? If so, I advise configuring an internal load balancer in the AKS cluster.
"Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster."
You can try following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). Good luck!
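For reference, the internal load balancer in that document boils down to a single annotation on a LoadBalancer-type service. A minimal sketch (the service name, ports, and selector are placeholders):

```bash
# The annotation tells Azure to provision an internal, VNet-only load balancer
# instead of a public one.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: internal-app
EOF
```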
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connections from all pods in your cluster will go out through the first LoadBalancer-type service's IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it, as stated in this article, via the loadBalancerIP field in the service spec.
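A sketch of that second approach; all names and the IP value are placeholders, and depending on your AKS version the public IP may need to be created in the cluster's node resource group (the one named MC_...):

```bash
# Pre-create a static public IP, then pin a LoadBalancer service to it so all
# pod outbound traffic SNATs through one predictable address.
az network public-ip create \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name aks-egress-ip \
  --allocation-method Static

# The service only needs to exist to claim the IP; the selector may match
# nothing if you use it purely as an egress anchor.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: egress-anchor
spec:
  type: LoadBalancer
  loadBalancerIP: 40.121.0.42   # the static IP created above (placeholder)
  ports:
  - port: 80
  selector:
    app: egress-anchor
EOF
```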
On a side note: rather than a ConfigMap, given the sensitivity of the connection strings, I'd suggest you create a Secret and pass that down to your Deployment, to be mounted as a volume or exported as environment variables.
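A minimal sketch of that, with hypothetical key names, values, and image:

```bash
# Store the connection parameters in a Secret (all names/values are placeholders)
kubectl create secret generic app-connections \
  --from-literal=REDIS_HOST=myredis.redis.cache.windows.net \
  --from-literal=REDIS_KEY='<redis-access-key>' \
  --from-literal=POSTGRES_PASSWORD='<pg-password>'

# Export every key of the Secret as an environment variable in the pod
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry.azurecr.io/myapp:latest   # placeholder image
        envFrom:
        - secretRef:
            name: app-connections
EOF
```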
Related
I have hosted my SQL database on Azure SQL.
From my AKS cluster, I found that none of the pods are able to connect to Azure SQL.
DB Connection:
Data Source=tcp:dbname.database.windows.net,1433;Initial Catalog=dbname;User Id={account};Password={password}
In the Azure portal, I have enabled the setting below.
I double-checked the connection string and am able to connect from my local machine, but from inside the Kubernetes pod, when I telnet to the server it responds:
Connection closed by foreign host.
May I know what is going wrong here?
Azure provides two options for pods running on AKS worker nodes to access a MySQL or PostgreSQL DB instance:
Create a firewall rule on the Azure DB Server with a range of IP addresses that encompasses all IPs of the AKS Cluster nodes (this can be a very large range if using node auto-scaling).
Create a VNet Rule on the Azure DB Server that allows access from the subnet the AKS nodes are in. This is used in conjunction with the Microsoft.Sql VNet Service Endpoint enabled on the cluster subnet.
VNet Rules are recommended and preferable in this situation for several reasons. Nodes are often configured with dynamic IP addresses that can change when a node is restarted, resulting in broken firewall rules that reference specific IPs. Nodes can also be added to the cluster, which would require updating the firewall rule with the additional IPs. VNet Rules avoid these issues by granting access to the entire subnet the AKS nodes are in.
Manual steps
Configuring a secure networking environment for AKS and Azure DB requires the following (a CLI sketch of these steps follows the list):
AKS cluster setup
ResourceGroup: a logical grouping of resources, required for all the resources below.
VNet: creates a virtual network for the AKS cluster nodes.
Subnet: a range of private IPs for the AKS cluster nodes.
Create an AKS cluster using the above resources.
Configure managed service access
VNet Service Endpoint: update the cluster subnet above with a service endpoint for Microsoft.Sql to enable connectivity to the new Azure DB service resource.
Provision managed services with private IPs on the cluster’s network
Provision managed Azure DB service instances: PostgreSQL, MySQL.
VNet Rule for each managed service instance to allow traffic from all nodes in the cluster subnet to a given Azure DB service instance (PostgreSQL, MySQL).
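A rough Azure CLI sketch of the steps above; every name, CIDR, password, and SKU below is a placeholder to adapt to your environment (GP_Gen5_2 is used because, as far as I know, VNet rules require the General Purpose tier or higher):

```bash
# 1. Resource group, VNet and subnet for the AKS nodes
az group create --name myRG --location eastus

az network vnet create \
  --resource-group myRG --name myVnet \
  --address-prefixes 10.0.0.0/8 \
  --subnet-name aks-subnet --subnet-prefix 10.240.0.0/16

# 2. AKS cluster with Azure CNI, nodes placed in the subnet above
SUBNET_ID=$(az network vnet subnet show \
  --resource-group myRG --vnet-name myVnet --name aks-subnet \
  --query id -o tsv)

az aks create \
  --resource-group myRG --name myAKS \
  --network-plugin azure \
  --vnet-subnet-id "$SUBNET_ID" \
  --service-cidr 10.2.0.0/24 --dns-service-ip 10.2.0.10 \
  --generate-ssh-keys

# 3. Service endpoint on the cluster subnet (Microsoft.Sql covers
#    Azure SQL, PostgreSQL and MySQL)
az network vnet subnet update \
  --resource-group myRG --vnet-name myVnet --name aks-subnet \
  --service-endpoints Microsoft.Sql

# 4. Managed PostgreSQL instance and a VNet rule for the AKS subnet
az postgres server create \
  --resource-group myRG --name mypgserver \
  --admin-user pgadmin --admin-password '<strong-password>' \
  --sku-name GP_Gen5_2

az postgres server vnet-rule create \
  --resource-group myRG --server-name mypgserver \
  --name allow-aks-subnet \
  --vnet-name myVnet --subnet aks-subnet
```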
I have found the issue. Basically, the issue is AKS getting the wrong configuration: for Identity, it doesn't read the proper appsettings.json, which should point to /secrets/*.json.
AddEntityFrameworkStores()
I changed the code to retrieve the information from the correct secret, and the app works now.
Sadhu's answer is correct and secure. But first, you can quickly check by enabling the traffic as follows.
First select your server from your resource group.
Now, in your SQL server, enable "Allow Azure services and resources to access this server".
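If you prefer the CLI, that portal toggle should correspond to the special firewall rule with start and end address 0.0.0.0 (server and resource group names are placeholders):

```bash
# 0.0.0.0 as both start and end is Azure's convention for
# "allow access from Azure services"
az sql server firewall-rule create \
  --resource-group myRG --server mysqlserver \
  --name AllowAllWindowsAzureIps \
  --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
```

Keep in mind this allows traffic from any Azure resource, not just your cluster, so the VNet-rule approach described above is the more secure long-term option.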
I am able to create an AKS cluster with advanced networking, and I am also able to integrate an application load balancer with this AKS cluster, but I am unable to find any way to integrate an Azure Application Gateway with AKS.
Using Application Gateway as an Ingress controller for AKS is in a beta state at the moment (as shown on the GitHub page: https://github.com/Azure/application-gateway-kubernetes-ingress), so I don't believe there will be any support for setting it up with Terraform until it gets to GA.
You might be able to do something with exec resources (e.g. a null_resource with a local-exec provisioner) to set it up, but that would be up to you to figure out.
Unfortunately, it seems there is no way to integrate the application gateway with the AKS cluster directly. You can see all the things you can set for AKS here.
But you can integrate the application gateway with the AKS cluster if you make use of the AKS internal load balancer and the Application Gateway backend pool addresses. You can take a look at the steps for how to integrate an application gateway with an AKS cluster.
First of all, you need to plan the AKS cluster network and reserve a specific IP address to use as the application gateway backend pool address in Terraform. Hope this helps; if there are any more questions, you can message me.
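As an illustration of that plan: pin the internal load balancer to a planned address inside the AKS subnet, then point an Application Gateway backend pool at that same address. The 10.240.0.25 address, names, and resource group are all placeholders; the IP must be a free address within the AKS subnet:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myapp-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # planned static internal IP (placeholder)
  ports:
  - port: 80
  selector:
    app: myapp
EOF

# Point an Application Gateway backend pool at the same internal IP
az network application-gateway address-pool create \
  --resource-group myRG --gateway-name myAppGateway \
  --name aks-backend --servers 10.240.0.25
```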
I'm trying to expose web services from my existing service in an AKS managed cluster on Azure. I did the NSG port configuration from the portal to let outbound traffic out and restarted the VM several times, but my node cannot ping anything on the internet. I'm not trying to ping anywhere by FQDN; I'm trying with the IP address. How can I reach a service in my cluster from the internet?
How did you create the service and pod? By default, a LoadBalancer-type service will create all the rules for you, and you don't need to create the rules yourself.
Can you share your pod details?
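For example, a LoadBalancer service created like this (deployment name and ports are placeholders) makes AKS provision the public IP and the NSG rules for you:

```bash
kubectl expose deployment myapp --type=LoadBalancer --port=80 --target-port=8080
kubectl get service myapp --watch   # wait for EXTERNAL-IP to be assigned
```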
I want to deploy some Kubernetes workloads which are visible from some other VMs on Azure, but not visible from the outside world.
For example: I might have a VM running a Zuul gateway which, for some routes, I want to redirect to the K8s cluster, yet I don't want to allow people to directly access my K8s cluster.
Is it possible to place my AKS inside a VPN? If so, how should I achieve this?
In addition to the options pointed out by @4c74356b41, you can run an ingress controller on the cluster and limit it to your internal server IP only.
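A sketch of that, assuming an nginx ingress controller; 10.0.1.4/32 stands in for your gateway VM's address, and the selector label is a placeholder for whatever your controller deployment uses. loadBalancerSourceRanges is enforced by the Azure cloud provider via NSG rules:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 10.0.1.4/32           # only the gateway VM may reach the ingress
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  selector:
    app: nginx-ingress-controller
EOF
```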
So this isn't possible now (at least out of the box) due to the nature of AKS being a service with no VNet integration as of yet. You can try to hack around this, but it will probably not work really well, as your agents need to talk to the master.
I see 2 options:
Use internal load balancers instead of public ones to expose your services
Use ACS, which has VNet integration, but I'm not sure if you can apply 2 routes to the same VNet
I am looking for a way to create a Docker cluster (probably Kubernetes) on Azure and expose the containers only via a VNet to my datacenter.
Is such a setup possible?
That is, the container services can only be accessed via the VPN that is created, so that the containers can use private resources (mainly a database) not available in the Azure cloud.
And so that I can access the resources in the cloud only from my datacenter.
Yes, that is perfectly possible. Depending on your setup, you either deploy a regular Kubernetes cluster and use a site-to-site VPN to connect the networks, or use acs-engine to deploy Kubernetes into an existing VNet/subnet.
You would also need to tweak your network security group rules to allow traffic to flow (if you have them).
https://github.com/Azure/acs-engine/tree/master/examples/vnet
https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-walkthrough
https://blogs.technet.microsoft.com/canitpro/2017/06/28/step-by-step-configuring-a-site-to-site-vpn-gateway-between-azure-and-on-premise/
"I am looking for a way to create a Docker cluster (probably Kubernetes) on Azure and expose the containers only via a VNet to my datacenter."
Yes. We just create the K8s pods and do not expose them to the internet. Then create an S2S VPN connecting the Azure VNet to your DC; this way, your DC's VMs can connect to the Azure K8s pods via their Azure private IP addresses.
Update:
If you want to connect to your K8s pods via VPN, we can create an Azure route table to achieve that.
For more information about creating a route table, please refer to my other answer.
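For illustration, one common pattern with kubenet clusters looks roughly like this; the CIDR, node IP, and names are placeholders, and pods are reached via their node as the next hop, with the route table associated with the subnet on the VPN-facing side:

```bash
# Route table with one route per node's pod CIDR, next hop = that node
az network route-table create \
  --resource-group myRG --name aks-pod-routes

az network route-table route create \
  --resource-group myRG --route-table-name aks-pod-routes \
  --name node0-pods \
  --address-prefix 10.244.0.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.240.0.4

# Associate the route table with the VPN gateway's subnet
az network vnet subnet update \
  --resource-group myRG --vnet-name myVnet --name GatewaySubnet \
  --route-table aks-pod-routes
```

Note that with Azure CNI (advanced networking) the pods get VNet IP addresses directly, so this routing step isn't needed.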