Azure k8s pods to cluster-external communication via internal IPs

I am migrating from GCP to Azure. My use case is simple: I have a k8s cluster running some web crawlers which need to talk to Elastic and Cassandra clusters (not in the k8s cluster) using internal IPs. All of these components can be in the same Azure region (e.g., East US). I understand from this discussion that VNET peering is the way to go.
This solution did not work for me; I am still unable to reach my Cass/ES cluster from the pods. I believe this solution is outdated. Is there some other approach to accomplish this that I am missing?

You can use an Azure route table to achieve that; with the route table in place, pod IP addresses are reachable from outside your k8s VNet and the pods can reach hosts in the peered VNet.
I have answered your other question here; please check it.
If you would like further assistance, please let me know. :)
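
The answer above doesn't spell out the steps, so here is a rough az CLI sketch of the peering-plus-route-table setup it describes; every resource group, VNet name, CIDR, and IP below is a hypothetical placeholder.

# Peer the AKS VNet with the VNet hosting Cassandra/Elasticsearch
# (repeat in the opposite direction from the data VNet as well).
az network vnet peering create \
  --resource-group my-aks-rg --vnet-name aks-vnet \
  --name aks-to-data \
  --remote-vnet "$DATA_VNET_ID" \
  --allow-vnet-access --allow-forwarded-traffic

# With kubenet networking, pod IPs live in a CIDR the data VNet does
# not know about; add a route that sends that CIDR via an AKS node,
# then attach the route table to the data subnet.
az network route-table create -g my-data-rg -n pod-routes
az network route-table route create -g my-data-rg \
  --route-table-name pod-routes --name to-pods \
  --address-prefix 10.244.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.240.0.4
az network vnet subnet update -g my-data-rg \
  --vnet-name data-vnet --name data-subnet --route-table pod-routes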

Related

Kubernetes multi-clusters pod peering on Azure

How to configure routing between Pods on multiple Azure Kubernetes clusters?
Something similar to ip-alias/vpc-native on Google Cloud
In AKS, I think you can create the AKS clusters in different subnets of the same VNet using advanced networking (Azure CNI). For more details about the network, see Choose the appropriate network model. But it's not a perfect solution and there are some limitations; for example, the AKS clusters must be in the same region as the VNet.
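
Purely as an illustration (resource names and CIDRs are made up), the subnet-per-cluster layout with advanced networking looks something like this with the az CLI:

# One VNet, one subnet per AKS cluster.
az network vnet create -g my-rg -n shared-vnet \
  --address-prefixes 10.0.0.0/8 \
  --subnet-name aks-subnet-1 --subnet-prefixes 10.240.0.0/16
az network vnet subnet create -g my-rg --vnet-name shared-vnet \
  -n aks-subnet-2 --address-prefixes 10.241.0.0/16

# Create each cluster with Azure CNI (advanced networking) in its subnet.
SUBNET_ID=$(az network vnet subnet show -g my-rg \
  --vnet-name shared-vnet -n aks-subnet-1 --query id -o tsv)
az aks create -g my-rg -n cluster-1 \
  --network-plugin azure --vnet-subnet-id "$SUBNET_ID"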
You can take a look at Gotchas & Solutions Running a Distributed System Across Kubernetes Clusters. As it says, it's hard to communicate between different regions, and nothing has yet solved the problem of running a distributed system that spans multiple clusters; it's still a very hard experience that isn't really documented.
So it may be a long wait for a perfect solution.

integrate azurerm_application_gateway with AKS with terraform

I am able to create an AKS cluster with advanced networking and to integrate an application load balancer with this AKS cluster, but I am unable to find any way to integrate Azure Application Gateway with AKS.
Using Application Gateway as an ingress controller for AKS is in a beta state at the moment (as shown on the GitHub page: https://github.com/Azure/application-gateway-kubernetes-ingress), so I don't believe there will be any support for setting it up with Terraform until it reaches GA.
You might be able to do something with exec resources to set it up, but that would be up to you to figure out.
Unfortunately, it seems there is no way to integrate Application Gateway with the AKS cluster directly, and you can see all the things you can set for AKS here.
But you can integrate Application Gateway with an AKS cluster once you understand the AKS internal load balancer and the Application Gateway backend pool addresses. You can take a look at the steps for integrating Application Gateway with an AKS cluster.
First of all, you need to plan the AKS cluster network and reserve an exact IP address for the Application Gateway backend pool address in Terraform (see the sketch below). Hope this helps; if there are any more questions, you can message me.
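
Not part of the original answer, but as a sketch (the IP, names, and selector are hypothetical) the integration comes down to exposing the app on a planned internal IP and pointing the Application Gateway backend pool at it:

# Expose the app inside the VNet on a fixed, pre-planned IP.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # a free address in the AKS subnet
  ports:
  - port: 80
  selector:
    app: my-app
EOF

# Point the Application Gateway backend pool at that same IP.
az network application-gateway address-pool update \
  -g my-rg --gateway-name my-appgw -n aks-pool --servers 10.240.0.25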

How do I configure my DNS to work with Rancher 2.0 ingress?

I'm new to Kubernetes and Rancher, but have a cluster set up and a workload deployed. I'm looking at setting up an ingress, but am confused by what my DNS should look like.
I'll keep it simple: I have a domain (example.com) and I want to be able to configure the DNS so that it's routed through to the correct IP in my 3 node cluster, then to the right ingress and load balancer, eventually to the workload.
I'm not interested in this xip.io stuff as I need something real-world, not a sandbox, and there's no documentation on the Rancher site that points to what I should do.
Should I run my own DNS via Kubernetes? I'm using DigitalOcean droplets and haven't found any way to get Rancher to set up DNS records for me (as it purports to do for other cloud providers).
It's really frustrating as it's basically the first and only thing you need to do... "expose an application to the outside world", and this is somehow not trivial.
Would love any help, or for someone to explain to me how fundamentally dumb I am and what I'm missing!
Thanks.
You aren't dumb, man. This stuff gets complicated. Are you using AWS or GKE? Most methods of deploying kubernetes will deploy an internal DNS resolver by default for intra-cluster communication. These URLs are only useful inside the cluster. They take the form of <service-name>.<namespace>.svc.cluster.local and have no meaning to the outside world.
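
For instance, you can see that internal resolver at work from any pod in the cluster (the service and namespace names here are placeholders):

# One-off pod that resolves a cluster-internal service name.
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup my-service.default.svc.cluster.local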
However, exposing a service to the outside world is a different story. On AWS you may do this by setting the service's type to LoadBalancer, whereupon Kubernetes will automatically spin up an AWS load balancer, along with a public domain name, and configure it to point to the service inside the cluster. From there, you can configure any domain name that you own to point to that load balancer.
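
A minimal example of such a service (the names and ports are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-web
spec:
  type: LoadBalancer    # the cloud provider provisions an external LB
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-web
EOF

Once the load balancer is provisioned, kubectl get service my-web shows its external hostname/IP, and you can point a CNAME or A record for your domain at that.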

Network setup for accessing Azure Redis service from Azure AKS

We have an application that runs on an Ubuntu VM. This application connects to the Azure Redis, Azure Postgres, and Azure Cosmos DB (MongoDB API) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and AKS should be configured so that pods inside the cluster can access the above services, or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image.
In the firewall/inbound rules of the services, I added the AKS API server IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
"I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster."
I assume that you would like all the services to access each other, and that all the services are in the AKS cluster's virtual network? If so, I advise you to configure an internal load balancer in the AKS cluster.
"Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster."
You can try following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). In the end, good luck to you!
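
For reference, the core of that document is a single annotation on a LoadBalancer service; a minimal sketch (name, port, and selector are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer    # gets a private IP from the VNet, not a public one
  ports:
  - port: 80
  selector:
    app: internal-app
EOF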
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connections from all pods in your cluster will go out through the first LoadBalancer-type service's IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it, as stated in this article, via the loadBalancerIP spec.
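
A sketch of that approach, assuming the public IP is created in the cluster's node resource group (the MC_* group) and all names are placeholders:

# Pre-create a static public IP for outbound traffic.
az network public-ip create -g MC_my-rg_my-aks_eastus \
  -n outbound-ip --allocation-method Static
IP=$(az network public-ip show -g MC_my-rg_my-aks_eastus \
  -n outbound-ip --query ipAddress -o tsv)

# Bind it to a LoadBalancer service so all pod egress uses this IP.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: egress-pin
spec:
  type: LoadBalancer
  loadBalancerIP: $IP
  ports:
  - port: 80
  selector:
    app: egress-pin
EOF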
On a side note, rather than a ConfigMap, due to the sensitivity of the connection strings, I'd suggest you create a Secret and pass that down to your Deployment to be mounted or exported as environment variables.
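
For example (all names and values below are placeholders):

# Keep the connection params in a Secret instead of a ConfigMap.
kubectl create secret generic app-conn \
  --from-literal=REDIS_HOST=myapp.redis.cache.windows.net \
  --from-literal=REDIS_KEY=changeme

# Export every key of the Secret as environment variables in the pod.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crawler
spec:
  selector:
    matchLabels:
      app: crawler
  template:
    metadata:
      labels:
        app: crawler
    spec:
      containers:
      - name: crawler
        image: myregistry/crawler:latest
        envFrom:
        - secretRef:
            name: app-conn
EOF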

How to deploy AKS (Azure Kubernetes Service) in a VPN?

I want to deploy some Kubernetes workloads which are visible from some other VMs on Azure, but not visible to the outside world.
For example: I might have a VM running a Zuul gateway which, for some routes, I want to redirect to the K8s cluster, yet I don't want to allow people to directly access my K8s cluster.
Is it possible to place my AKS inside a VPN? If so, how should I achieve this?
In addition to the options pointed out by @4c74356b41, you can run an ingress controller on the cluster and limit it to your internal server IP only.
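
A sketch of that with the NGINX ingress controller (the host and CIDR are placeholders); only callers from the given internal range get through:

kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-only
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.1.0/24"
spec:
  rules:
  - host: app.internal.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
EOF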
So this isn't possible now (at least out of the box) due to the nature of AKS being a service with no VNet integration as of yet. You can try to hack around this, but it will probably not work really well, as your agents need to talk to the master.
I see 2 options:
Use internal load balancers instead of public ones to expose your services
Use ACS, which has VNet integration, but I'm not sure if you can apply 2 routes to the same VNet
