Internet connection from a pod inside an AKS cluster - Azure

I am trying to send an HTTP request from my pod to the outside world, but it seems impossible.
I currently have a load balancer with a fixed IP in place, but so far I have only tested connections to the service.
Is there any specific constraint that would cause this? Is it possible to overcome the issue?

Your worker nodes, where your pods live, are probably in private subnets (it's good practice to keep them there), and if that is the case then it's not a Kubernetes problem. You should set up NAT to allow outbound traffic. I'm not familiar with Azure, but you should also check other abstractions that control your traffic (like Security Groups or NACLs in AWS).
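To check whether outbound traffic works at all, a quick way is to start a throwaway pod and make a request from inside the cluster. A minimal sketch, where the image and target URL are just examples:

```yaml
# egress-test.yaml - throwaway pod that makes one outbound HTTP request.
apiVersion: v1
kind: Pod
metadata:
  name: egress-test
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.5.0   # any curl-capable image works
      # -v prints the full request/response; the timeout avoids hanging
      # forever if NAT or a security rule silently drops the traffic.
      args: ["-v", "--max-time", "10", "https://example.com"]
```

If `kubectl logs egress-test` shows a timeout while a VM in the same subnet can reach the same URL, the problem is almost certainly NAT or a network security rule rather than Kubernetes itself.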

Related

Isolate pod connection without network policy

I have one question, kindly.
I want to isolate pod connections in one namespace, so that pods accept no traffic apart from pods with a label that we explicitly allow.
I came across network policies in Kubernetes, but the limitation is that in my current production setup network policy is disabled, and to enable it Kubernetes will re-create all the nodes in the node pools, which is not acceptable in production.
Is there any other workaround for this, or should I maybe use Istio?
I really need an idea of how to get this done.
PS: my existing cluster is running on GKE.
Thank you.
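For reference, this is roughly what the desired rule looks like as a standard NetworkPolicy, if enabling it were an option; the namespace, labels, and names below are invented for illustration. A service mesh such as Istio can enforce a similar allow-list at the sidecar level (via an AuthorizationPolicy) without recreating the node pools:

```yaml
# Hypothetical policy: pods labelled app=backend in namespace "prod"
# accept ingress traffic only from pods labelled role=frontend.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
```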

Create Azure Kubernetes ingress controller to limit 1 connection per pod

I'm using Azure Kubernetes Service and have a unique scenario where I want to allow only one connection per pod. I used the "advanced" networking option to set up my cluster such that each pod has its own internal IP address. The problem is, all of these pods are behind a public load balancer IP address, and the load balancer decides where to route the traffic.
I need to either A) set up a rule such that the load balancer only allows one connection per pod and routes new traffic to new pods, 1 per request, or B) set up an ingress controller to do the same. I think B) is the solution but I have no clear path on how to do this. I see that you can route by URL, but you'd have to set up a rule for each pod, which is definitely not a good idea. Is there any way to set up a rule that just limits 1 session per pod? Or some other method that works similarly.
Thanks.
This is a very good question. Based on the solutions you suggested in the second part of your question, I would like to add my input here. However, you are not limited to only these options; there are more effective, advanced ways people establish connections to their pods.
A) I am looking at how you are routing your traffic to your pods from a load balancer; in general, each pod inside a Kubernetes cluster gets its own IP by default. Knowing how you manage traffic flow from the external world to each pod, I could add my answer to part A of the possible solutions. But it is not advisable to go this route, because it is likely that a pod will die and a new pod with a new IP will be created, and you would need to manually route traffic to the newly created pod, which is exactly why people opted for Kubernetes rather than manually managing Docker containers on a VM. But I might be wrong; you might have a different, complex system, so it is debatable.
B) As you said and researched, Ingress and Services are also a solution. Unfortunately, there are no ingress controller annotations available as of now that limit connections to one per pod, but as you said, URL-based routing would be one part of the solution. As you already identified, though, there will be overhead with this approach: it is more like a single service per single pod and a subdomain for each service, i.e. a single deployment with a unique service associated with it, and each unique service with a unique subdomain. It's a complex setup, but doable.
Edit Based on Comments (Removed HPA)
Based on the information you added, I can suggest a different approach, though it is kind of the wrong way of using Kubernetes; then again, it is debatable depending on the kind of system you are planning to build. Run a proxy server (HAProxy, NGINX, or your favourite) on its own on one of the nodes, and route traffic from the outside world directly to your pods using the pods' internal IPs in your proxy configuration. You can route based on the number of connections, etc., from the proxy config. Remember, this is not a Kubernetes pod; it's a standalone service your OS is running. But be cautious: when the node dies, the pod dies, and so does the IP address of the pod.
But this is something we shouldn't do. I am sure in a couple of weeks or so, once you get the bigger picture of K8s and its moving parts, you will say this is wrong, as there is a lot of manual setup overhead.
Hope this is helpful.
I'm fairly new to the k8s world, but as I understand it, you should be able to do this with the nginx.org/max-conns annotation in the NGINX Ingress Controller:
https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
That way you should be able to limit the number of connections to 1 per 'upstream', or pod.
I.e. the load balancer directs traffic to NGINX, and NGINX proxies the traffic to pods with one concurrent request per pod.
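A sketch of what that might look like; the host, service name, and port are placeholders, and note that the nginx.org/* annotations are specific to the NGINX Inc. controller documented at the link above, not the community kubernetes/ingress-nginx controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: one-conn-per-pod
  annotations:
    # Per the linked docs, max-conns caps concurrent connections
    # per upstream server, i.e. per pod endpoint.
    nginx.org/max-conns: "1"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app         # placeholder Service fronting the pods
                port:
                  number: 80
```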

When to use external LoadBalancer in K8s?

Explaining my confusion / lack of understanding
When reading about the external LoadBalancer in K8s, which is a cloud-provider-only feature, I don't quite understand when it should be used, as when one creates a Deployment, K8s will do round-robin load balancing across the pods in that Deployment.
So from my current understanding, all one would need to do is create a NodePort Service, and you have the equivalent of an external load balancer?
Or should I think of the LoadBalancer type as HAProxy/NGINX/Envoy, where one can do SSL, reverse proxying, and many other useful things?
My current guess is that the proper use of a LoadBalancer is that it fronts the node ports of many nodes, but I can't find anything to back that up.
Question
Can anyone explain when and why to use a LoadBalancer instead of just a NodePort?
For example, say you want to deploy multiple applications in your cluster, say 10 apps.
You would like to access these 10 apps over the internet. One way is to set those 10 application services as NodePort so you can access them from outside. For this to happen, Kubernetes opens 10 node ports on each cluster node. This is a security risk.
Most enterprises, which work behind a firewall in a closed network, don't allow external traffic to/from any ports other than HTTP/HTTPS (80/443).
Another way is to set the service type as LoadBalancer for each application service. So, to access the 10 apps, you would be provisioning 10 load balancers to reach the app servers over HTTP/HTTPS ports. Since load balancers are charged resources, it is not economically viable to have one load balancer for each service that you want to access over the internet.
Is there a way to access all those 10 app services running inside Kubernetes over a single port? This is where the ingress controller comes into the picture.
An ingress controller allows a single IP and port to reach all services running in K8s through ingress rules. The ingress controller's service is set to type LoadBalancer so it is accessible from the public internet, as sketched below.
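That last point is just an ordinary Service of type LoadBalancer sitting in front of the ingress controller's pods, so the whole cluster needs only one public entry point. A rough sketch; the labels and namespace depend on which controller you install:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer               # the single public entry point
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder; match your controller's pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```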

What are the Advantages of using Kubernetes Ingress in Azure AKS

My understanding is that setting the Service type to LoadBalancer creates a new Azure Load Balancer and assigns an IP address to the Service. Does this mean that I can have multiple Services using port 80? If the app behind my Service (an ASP.NET Core app) can handle TLS and HTTPS why shouldn't I just use LoadBalancer's for any Service I want to expose to the internet?
What is the advantage of using an Ingress if I don't care about TLS termination (You can let Cloudflare handle TLS termination)? If anything, it slows things down by adding an extra hop for every request.
Update
Some answers below mention that creating load balancers is costly. It should be noted that load balancers on Azure are free, but they do charge for IP addresses, of which they give you five for free. So for small projects where you want to expose up to five IP addresses, it's essentially free. Any more than that, and you may want to look at using Ingress.
Some answers also mention extra complexity if you don't use Ingress. I have already mentioned that Cloudflare can handle TLS termination for me. I've also discovered the external-dns Kubernetes project, which creates DNS entries in Cloudflare pointing at the load balancer's IP address. It seems to me that cutting out Ingress reduces complexity, as it's one less thing that I have to configure and manage. The choice of Ingress controllers is also massive; it's likely that I'll pick the wrong one, which will end up unmaintained after some time.
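For reference, one service in the setup I'm describing looks roughly like the following sketch; the hostname and app name are placeholders, and external-dns watches the external-dns.alpha.kubernetes.io/hostname annotation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # external-dns reads this and creates the matching DNS record
    # (in Cloudflare, in this setup) for the load balancer's IP.
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080   # whatever port the app container listens on
```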
There is a nice article here which describes the differences between a Service (LoadBalancer) and an Ingress.
In summary, you can have multiple Services (LoadBalancer) in the cluster, where each application is exposed independently of the others. The main issue is that each load balancer added increases the cost of your solution, and it does not have to be this way unless you strictly need it.
If multiple applications listen on port 80 inside their containers, there is no reason you also need to map them to port 80 on the host node. You can assign any port, because the Service will handle the dynamic port mappings for you.
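That mapping is just the Service's port/targetPort pair; nothing forces them to match. A minimal sketch with arbitrary names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80         # what clients of the Service connect to
      targetPort: 8080 # what the container actually listens on
```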
Ingress is best in this scenario, because you can have one ingress listening on port 80 and routing the traffic to the right service based on many variables, like:
Domain
URL path
Query string
And many others
Ingress is not just for TLS termination; in simple terms, it is a proxy/gateway that controls the routing to the right service, and TLS termination is just one of its features.
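For example, a single Ingress might fan traffic out by domain and URL path like this; the hosts, paths, and service names are invented, and an NGINX-class controller is assumed to be installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fan-out
spec:
  ingressClassName: nginx
  rules:
    # Route by domain...
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: { name: shop, port: { number: 80 } }
    # ...and by URL path within another domain.
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service: { name: api-v1, port: { number: 80 } }
```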
No, you can't have multiple services listening on port 80, as the load balancer won't know where to route them (an ingress will, however). If you can afford to host each service on a different port, you could use a load balancer. Alternatively, if you have a public IP for each service and a different backend port on each service, you can achieve this.
quote: The protocol and port combination you entered matches another rule used by this load balancer. The protocol and port combination of each load balancing rule and inbound NAT rule on a load balancer must be unique.
Again, if you are a developer, you probably do not realize how much more convenient it is to manage a certificate on the ingress, and not on all the individual containers that are supposed to be accessible.
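Managing the certificate on the ingress boils down to a single tls block referencing a Secret, instead of wiring certificates into every container. A sketch, where the host and secret name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls   # a kubernetes.io/tls Secret holding cert + key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app        # placeholder backend service
                port:
                  number: 80
```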

How to deploy AKS (Azure Kubernetes Service) in a VPN?

I want to deploy some Kubernetes workloads which are visible from some other VMs on Azure, but not visible from the outside world.
For example: I might have a VM running a Zuul gateway which, for some routes, I want to redirect to the K8s cluster, yet I don't want to allow people to directly access my K8s cluster.
Is it possible to place my AKS cluster inside a VPN? If so, how should I achieve this?
In addition to the options pointed out by #4c74356b41, you can run an ingress controller on the cluster and limit it to your internal server IP only.
So this isn't possible right now (at least out of the box), due to the nature of AKS being a service with no VNet integration as of yet. You can try to hack around this, but it will probably not work really well, as your agents need to talk to the master.
I see 2 options:
Use internal load balancers instead of public ones to expose your services (see the sketch after this list)
Use ACS, which has VNet integration, but I'm not sure if you can apply 2 routes to the same VNet
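The first option is driven by a well-known annotation on the Service. On AKS it looks roughly like this; the source-range restriction is optional and the CIDR is an example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    # Tells the Azure cloud provider to create an internal load balancer
    # on the VNet instead of one with a public IP.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  # Optionally restrict which source addresses may connect,
  # e.g. the subnet where the Zuul gateway VM lives.
  loadBalancerSourceRanges:
    - 10.0.1.0/24
  selector:
    app: internal-app
  ports:
    - port: 80
      targetPort: 8080
```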
