Kubernetes - different "Services" for TCP connections outside the cluster (Azure)

I am using Azure Kubernetes Service (AKS) and I need Services solely for TCP connections; I do not need HTTP at all, which I think is important to emphasize. The Service must be reachable from outside the AKS cluster, but only from a VM in a different virtual network (and NOT from the public internet). The conclusion from the Azure admins was that I should:
expose my application Pods with a ClusterIP Service (sketched after this list)
install the NGINX Ingress Controller with an internal Load Balancer, pointing at that ClusterIP Service
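A minimal sketch of step 1, assuming a hypothetical app named tcp-app listening on port 5000 (both names are placeholders, not from the original post):

```yaml
# Step 1 sketch: a ClusterIP Service for a hypothetical TCP app.
apiVersion: v1
kind: Service
metadata:
  name: tcp-app
spec:
  type: ClusterIP        # the default; reachable only inside the cluster
  selector:
    app: tcp-app         # must match the Pod labels
  ports:
    - protocol: TCP
      port: 5000         # port the Service exposes
      targetPort: 5000   # port the container listens on
```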
I created a kind: Ingress resource as well, BUT that was NOT needed at all, since I am not using an HTTP connection; as I mentioned, I access my containerized app from outside solely via the NGINX IP address and port. And everything works with this Ingress Controller in place.
I have read a lot about Kubernetes Services, their types, connections, and Ingress, and I would like to summarize my dilemmas and confusions into the questions below. In short: why did I have to implement this approach and not some simpler networking architecture without Ingress (since I am not using HTTP)?
ClusterIP - used for accessing the Service from any node in the cluster via CLUSTER-IP:PORT
NodePort - used for accessing the Service from outside the cluster via NodeIP:NodePort
Question 1: Why couldn't I just use NodeIP:NodePort to access my Service over TCP from the other Azure virtual network? Of course firewall rules need to be configured, but why is this approach not acceptable, and why did I have to install an Ingress Controller?
LoadBalancer - exposes the Service externally using a cloud provider's load balancer. OK, I must not use a public Load Balancer, that much is clear, but why couldn't I use a LoadBalancer Service of the internal type? This is explained at https://learn.microsoft.com/en-us/azure/aks/internal-lb, which states that such a load balancer is "...accessible only to applications running in the same virtual network as the Kubernetes cluster". The Service is then reached via EXTERNAL-IP:PORT.
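For reference, this is roughly what an internal LoadBalancer Service looks like on AKS per the linked article; the annotation keeps the load balancer on a private IP in the cluster's virtual network (the name and ports are placeholders):

```yaml
# Sketch of a LoadBalancer Service restricted to a private IP on AKS.
apiVersion: v1
kind: Service
metadata:
  name: tcp-app-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: tcp-app         # assumed Pod label, as in the earlier sketch
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
```

With VNet peering between the two virtual networks, the private EXTERNAL-IP this produces should be reachable from the other network without any public exposure.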
Question 2: Is there really no other way to use a LoadBalancer to access the Service from a different virtual network without exposing it publicly on the internet? Does it really have to involve the more complex networking architecture of an Ingress Controller? Again, I must emphasize this is a TCP-only use case, not HTTP.
Question 3: What is the usual and regular Service type and network setup for connecting from a different virtual network? Again, for TCP connections only; and if it matters, I am asking generally about Azure.
I would appreciate a full explanation for all three questions, since this kind of use case is not mentioned explicitly in the kubernetes.io documentation - it always says that Ingress resources are for HTTP, yet based on the instructions I got from the admins, the same setup was applied to my TCP case.
Thanks

Related

.Net Core microservices: using HTTPS on Kubernetes on Azure (AKS)

I'm in the process of containerizing various .NET Core API projects and running them in a Kubernetes cluster on Linux. I'm fairly new to this scenario (I usually use App Services on Windows), and some questions about best practices for secure connections are starting to come up:
Since these will run as Pods inside the cluster, my assumption is that I only need to expose port 80, correct? It's all internal traffic managed by the Service and Ingress. But is this good practice? Will issues arise once I configure a domain with a certificate and secure traffic starts hitting the running Pod?
When the time comes to integrate SSL, will I have to worry about opening port 443 on the containers or managing certificates within the container itself, or will all of this be handled by the Ingress, Services (or Application Gateway, since I am using AKS)? Right now, when I need to test locally over HTTPS, I have to add a self-signed certificate to the container and open port 443, and my assumption is that this should not be in place for production!
When I deploy into my cluster (I'm using AKS) with just port 80 open and assign a LoadBalancer Service, I get a public IP address. I'm used to Azure App Services, where you can use the global Microsoft SSL certificate right out of the box, e.g. https://your-app.azurewebsites.net. However, when I go to the public IP and configure a DNS label such as your-app.southcentralus.cloudapp.azure.com, it does not let me use HTTPS the way App Services does. Neither does the IP address. Maybe I don't have something configured properly in my Kubernetes instance?
Since many of these services are going to be public-facing API endpoints (consumed by a client application), they don't need custom domain names, as they won't be seen by the majority of the public. Is there a way to get secure connections with the IP address or the .cloudapp.azure.com domain? It would be cost- and time-prohibitive if I had to manage certificates for each of my services!
It depends on where you want to terminate your TLS. For most use cases, the ingress controller is a good place to terminate the TLS traffic and keep everything on plain HTTP inside the cluster. In that case, any HTTP port should work fine; if port 80 is what .NET Core exposes by default, keep it.
You are opening port 443 locally because you don't have an ingress controller configured; you can install one locally as well. In production, you would not need to open any ports beyond a single HTTP port, as long as the ingress controller handles the TLS traffic.
Ideally, you should not expose every service as a LoadBalancer. Services should be of type ClusterIP, exposed only inside the cluster. When you deploy an ingress controller, it creates one LoadBalancer Service, which becomes the single entry point into the cluster. The ingress controller then accepts traffic and routes it to the individual Services by hostname or path, terminating TLS on the way in, roughly as sketched below.
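A sketch of TLS termination at the ingress; the host, secret, and Service names are placeholders, and the TLS secret would typically be created by cert-manager (next paragraph) or by hand:

```yaml
# Sketch: TLS terminated at the ingress, plain HTTP to the backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-example-com-tls   # certificate + key for this host
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service       # a ClusterIP Service on port 80
                port:
                  number: 80
```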
Let's Encrypt is a free TLS certificate authority that you can use for your setup. If you don't own the domain name, you can use the HTTP-01 challenge to verify your identity and get the certificate. The cert-manager project makes it easy to configure Let's Encrypt certificates in any k8s cluster.
https://cert-manager.io/docs/installation/kubernetes/
https://cert-manager.io/docs/tutorials/acme/ingress/ (ignore the Tiller part if you have deployed it using kubectl or Helm 3)
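A minimal cert-manager ClusterIssuer for Let's Encrypt with the HTTP-01 solver might look like this (the email address is a placeholder; see the docs above for current API details):

```yaml
# Sketch: a Let's Encrypt issuer using the HTTP-01 challenge.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com             # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key      # stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx
```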
Sidebar: if you are using Application Gateway to front your applications, consider using the Application Gateway Ingress Controller.

When to use external LoadBalancer in K8s?

Explaining my confusion / lack of understanding
When reading about the external LoadBalancer in K8s, which is a cloud-provider-only feature, I don't quite understand when it should be used, since when one creates a Deployment, K8s already does round-robin load balancing across the Pods in that Deployment.
So from my current understanding, all one would need is a NodePort, and you'd have the equivalent of an external load balancer?
Or should I think of the LoadBalancer type as haproxy/nginx/Envoy, where one can do SSL, reverse proxying, and many other useful things?
My current guess is that the proper use of LoadBalancer is to balance traffic across many node IPs, but I can't find anything to back that up.
Question
Can anyone explain when and why to use LoadBalancer rather than just NodePort?
For example, say you want to deploy 10 applications in your cluster, and you would like to access these 10 apps over the internet. One way is to set those 10 application Services to type NodePort so you can access them from outside; for this to happen, Kubernetes opens 10 node ports on each cluster node, which is a security risk.
Most enterprises working behind a firewall in a closed network don't allow external traffic to or from any ports other than HTTP/HTTPS (80/443).
Another way is to set the Service type to LoadBalancer for each application Service. So, to access the 10 apps, you will be provisioning 10 load balancers to reach the app servers over HTTP/HTTPS ports. Since load balancers are billed resources, it is not economically viable to have one load balancer per Service that you want to access over the internet.
Is there a way to access all 10 app Services running inside Kubernetes over a single port? This is where the ingress controller comes into the picture.
An ingress controller allows a single IP and port to reach all Services running in k8s through ingress rules. The ingress controller's own Service is set to type LoadBalancer, so it is accessible from the public internet, as sketched below.
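A sketch of the fan-out: one Ingress behind the controller's single LoadBalancer IP, routing to several ClusterIP Services by path (the Service names and ports are placeholders):

```yaml
# Sketch: a single entry point routing to multiple ClusterIP Services by path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-fanout
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-service    # ClusterIP Service for the first app
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-service    # ClusterIP Service for the second app
                port:
                  number: 80
```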

Internet connection from a pod inside an AKS cluster

I am trying to send an HTTP request from my Pod to the outside world, but it seems impossible.
I currently have a load balancer with a fixed IP in place, but so far I have only tested connections to the Service.
Is there some specific constraint causing this? Is it possible to overcome the issue?
Your worker nodes, where your Pods live, are probably in private subnets (it's good practice to keep them there), and if that is the case, it's not a Kubernetes problem: you should set up NAT to allow outbound traffic. I'm not familiar with Azure, but you should also check any other abstractions that control your traffic (like Security Groups or NACLs in AWS).
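One quick way to check whether egress works at all is a throwaway client Pod; if this cannot reach the outside, the problem is NAT/routing rather than the application. The image and URL here are arbitrary choices:

```yaml
# Sketch: a one-shot Pod for testing outbound connectivity from the cluster.
# Inspect the result with: kubectl logs egress-test
apiVersion: v1
kind: Pod
metadata:
  name: egress-test
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:latest
      args: ["-v", "https://example.com"]   # appended to the curl entrypoint
```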

What are the Advantages of using Kubernetes Ingress in Azure AKS

My understanding is that setting the Service type to LoadBalancer creates a new Azure Load Balancer and assigns an IP address to the Service. Does this mean that I can have multiple Services using port 80? If the app behind my Service (an ASP.NET Core app) can handle TLS and HTTPS, why shouldn't I just use LoadBalancers for any Service I want to expose to the internet?
What is the advantage of using an Ingress if I don't care about TLS termination (You can let Cloudflare handle TLS termination)? If anything, it slows things down by adding an extra hop for every request.
Update
Some answers below mention that creating load balancers is costly. It should be noted that load balancers on Azure are free, but Azure does charge for IP addresses, of which you get five for free. So for small projects where you want to expose up to five IP addresses, it's essentially free. Any more than that, and you may want to look at using Ingress.
Some answers also mention extra complexity if you don't use Ingress. I have already mentioned that Cloudflare can handle TLS termination for me. I've also discovered the external-dns Kubernetes project, which creates DNS entries in Cloudflare pointing at the load balancer's IP address. It seems to me that cutting out Ingress reduces complexity, as it's one less thing to configure and manage. The choice of Ingress controllers is also massive; it's likely that I'll pick the wrong one, which will end up unmaintained after some time.
There is a nice article here which describes the differences between a Service of type LoadBalancer and an Ingress.
In summary, you can have multiple LoadBalancer Services in the cluster, where each application is exposed independently of the others. The main issue is that each added load balancer increases the cost of your solution, and it does not have to be this way unless you strictly need it.
If multiple applications listen on port 80 inside their containers, there is no reason to also map each of them to port 80 on the host node. You can assign any Service port, because the Service handles the port mapping for you, as in the sketch below.
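For example, two Services can both target containers listening on 80 while exposing different Service ports; this placeholder maps Service port 8080 to container port 80:

```yaml
# Sketch: the Service port does not have to match the container port.
apiVersion: v1
kind: Service
metadata:
  name: app2
spec:
  selector:
    app: app2           # placeholder Pod label
  ports:
    - port: 8080        # port clients connect to
      targetPort: 80    # port the container actually listens on
```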
The Ingress is best in this scenario, because you can have one ingress listening on port 80 and route the traffic to the right Service based on many variables, like:
Domain
URL path
Query string
and many others (some of these require controller-specific annotations)
Ingress is not just for TLS termination; in simple terms it is a proxy/gateway that controls routing to the right Service. TLS termination is just one of its features.
No, you can't have multiple Services listening on port 80 behind one load balancer, as the load balancer won't know where to route the traffic (an Ingress will, however). If you can afford to host each service on a different port, you could use a load balancer. Alternatively, if you have a public IP for each service and a different backend port on each service, you can achieve this.
Quote from the load balancer error: "The protocol and port combination you entered matches another rule used by this load balancer. The protocol and port combination of each load balancing rule and inbound NAT rule on a load balancer must be unique."
Again, if you are a developer, you probably don't realize how much more convenient it is to manage certificates on the ingress rather than on all the individual containers that are supposed to be accessible.

Accessing Mongo replicas in kubernetes cluster from AWS lambdas

Some of my data is in Mongo replicas hosted in Docker containers running in a Kubernetes cluster. I need to access this data from an AWS Lambda running in the same VPC and subnet as the Kubernetes minions hosting the Mongo containers; the Lambda and the minions run under the same security group. I am trying to connect using the URL "mongodb://mongo-rs-1-svc,mongo-rs-2-svc,mongo-rs-3-svc/res?replicaSet=mongo_rs", where mongo-rs-x-svc are three Kubernetes Services that provide access to the respective replicas. When I try to connect using this URL, it fails to resolve the replica hostnames (e.g. mongo-rs-2-svc). The same URL works fine for my web service, which runs in its own Docker container in the same Kubernetes cluster.
Here is the error I get from the Mongo client I use:
{"name":"MongoError","message":"failed to connect to server [mongo-rs-1-svc:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-rs-1-svc mongo-rs-1-svc:27017]"}
I tried replacing mongo-rs-x-svc with the Services' internal IP addresses in the URL. In that case the name-resolution error disappeared, but I got another one:
{"name":"MongoError","message":"failed to connect to server [10.0.170.237:27017] on first connect [MongoError: connection 5 to 10.0.170.237:27017 timed out]"}
What should I be doing to make this access work?
I understand that I could use the web service as an intermediary to access this data, but since my Lambda is in a VPC, I would have to deploy NAT gateways, and that would increase the cost. Is there a way to access the web service using an internal endpoint instead of the public URL? Maybe that is another way to get the data.
If any of you have a solution for this scenario, please share. I went through many threads that showed up as similar questions or in search results, but none had a solution for this case.
This is a common confusion with Kubernetes. The Service object in Kubernetes is only accessible from inside Kubernetes by default (i.e. when type: ClusterIP is set). If you want to be able to access it from outside the cluster you need to edit the service so that it is type: NodePort or type: LoadBalancer.
I'm not entirely sure, but it sounds like your network setup would allow you to use type: NodePort for your Service in Kubernetes. That will open a high-numbered port (e.g. 32XXX) on each of the nodes in your cluster that forwards to your Mongo Pod(s). DNS resolution for the Service names (e.g. mongo-rs-1-svc) only works inside the Kubernetes cluster, but by using NodePort I think you should be able to address them as mongodb://ec2-instance-1-ip:32XXX,ec2-instance-2-ip:32XXX,....
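As a sketch of that suggestion, one NodePort Service per replica might look like this; the fixed nodePort value and the Pod label are assumptions (any free port in the cluster's NodePort range, typically 30000-32767, works):

```yaml
# Sketch: exposing one Mongo replica on a fixed NodePort, reachable as
# <node-ip>:32017 from inside the VPC. Repeat per replica with unique ports.
apiVersion: v1
kind: Service
metadata:
  name: mongo-rs-1-svc
spec:
  type: NodePort
  selector:
    app: mongo-rs-1     # assumed Pod label; adjust to the actual labels
  ports:
    - protocol: TCP
      port: 27017       # in-cluster Service port
      targetPort: 27017 # container port
      nodePort: 32017   # arbitrary fixed choice within the NodePort range
```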
Coreyphobrien's answer is correct. Since you then asked how to keep the exposure private, I want to add some information:
You need to make the Lambdas part of the VPC your cluster is in. For this, use the --vpc-config parameter when creating or updating the Lambdas. This creates a virtual network interface in the VPC that gives the Lambda access. For details, see this.
After that you should be able to set the AWS security group for your instances so that the NodePort will only be accessible from another security group that is used for your Lambdas network interface.
This blog discusses an example in more detail.
