Can I replace the load balancer in an AKS cluster - Azure

I am using multiple pods and their services; some of the services are of type LoadBalancer, which exposes a public IP.
But many of the services are only called internally and don't need a public IP, a private IP would be enough. What change do I need to make to the load balancer so it uses a private IP?
I assume the load balancer costs more compared to the other service types in the AKS cluster.
Please let me know how to reduce the cost.

Just do not declare the services with type: LoadBalancer; use type: ClusterIP instead.

Declare type: ClusterIP instead of type: LoadBalancer under kind: Service.
This will give the service a private, cluster-internal IP, which can be accessed with either the IP or the name of the service:
http://<servicename>.<namespace>.svc.cluster.local:<port number>
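For example, a minimal sketch of such a Service manifest (the name, namespace, selector and ports below are placeholders for illustration):

apiVersion: v1
kind: Service
metadata:
  name: my-internal-service       # placeholder name
  namespace: default
spec:
  type: ClusterIP                 # cluster-internal (private) IP only
  selector:
    app: my-app                   # must match your pod labels
  ports:
    - port: 8080                  # port the service listens on
      targetPort: 8080            # container port in the pod

Inside the cluster this service would then be reachable at http://my-internal-service.default.svc.cluster.local:8080.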

Alternatively, you can annotate the Service so that the load balancer gets a private IP from your subnet:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
You can also check the docs here.
One hint: you should only expose the service of your Ingress Controller and not application Services directly; exposing Services directly is a Kubernetes anti-pattern and insecure.
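As a rough sketch, an internal LoadBalancer Service with that annotation could look like this (the name, selector and ports are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: my-internal-lb            # placeholder name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer              # still type LoadBalancer, but Azure assigns a private IP from the subnet
  selector:
    app: my-app                   # must match your pod labels
  ports:
    - port: 80                    # port exposed on the internal load balancer
      targetPort: 8080            # container port in the pod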

Related

Check kubernetes pod name from other VM

I have a separate VM in the same network as my Kubernetes cluster in Azure.
I have a Kafka pod and I am able to reach it using its IP. The problem is that the pod IP changes all the time.
Is there any way to get the correct IP each time the pod IP changes?
I would suggest using a Kubernetes Service to expose the pod. This avoids the problem of the changing pod IP, because the Service IP does not change.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record
Since you are accessing the pod from outside the Kubernetes cluster itself, use a NodePort or LoadBalancer type Service.
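As a sketch, a NodePort Service for the Kafka pod could look like this (the labels, ports and nodePort value are assumptions; adjust them to your setup):

apiVersion: v1
kind: Service
metadata:
  name: kafka                     # placeholder name
spec:
  type: NodePort
  selector:
    app: kafka                    # must match the Kafka pod's labels
  ports:
    - port: 9092                  # service port inside the cluster
      targetPort: 9092            # container port of the Kafka broker
      nodePort: 30092             # fixed port on every node (must be in the 30000-32767 range)

Your VM could then reach Kafka at <any-node-IP>:30092, no matter which IP the pod currently has.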
As mentioned by @arghya-sadhu already, going for a Kubernetes Service is the best option. The Kubernetes Service has an IP depending on the type of Service:
For services of type ClusterIP, you get a cluster-internal IP address.
For services of type LoadBalancer, you get a load balancer IP address, i.e. a public IP address.
For services of type NodePort, you can access the service using the node's address and the node port.
But whatever the type of service is, you can access the service using kube-dns within the cluster. So, let's say your service is named other-service, exposes port 8080 and runs in namespace abc; then you can access the service as follows:
http://other-service.abc:8080
Since your VM runs outside the cluster, it is better to use a LoadBalancer Service and access the pod using the load balancer URL or IP address. You can set up an Ingress in case there are multiple pods in the cluster that you want to connect to.
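A sketch of such a LoadBalancer Service (the name, selector and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: kafka-external            # placeholder name
spec:
  type: LoadBalancer              # Azure provisions a public IP on the cluster's load balancer
  selector:
    app: kafka                    # must match the Kafka pod's labels
  ports:
    - port: 9092                  # port exposed on the load balancer
      targetPort: 9092            # container port in the pod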

How to provide inbound access from the public internet to an app hosted in an Azure private Kubernetes cluster

I deployed an application in an Azure K8S cluster, using NGINX as the gateway, with a public static IP, based on AKS & PUBLIC-IP and on AKS & NGINX.
Now I need to deploy the application in an Azure private cluster, i.e. one running in a private VNet (see CREATE PRIVATE AKS); attempting to assign a public static IP to NGINX does not work, which can be expected, as the load balancer expects a private IP, not a public one.
How can I provide inbound access to my app hosted in a private cluster, using NGINX and a public static IP?
Hi, you have two ways to achieve that, depending on your needs (and Azure costs...):
1 - Use Azure Application Gateway. I use Terraform myself, and here you can see the official documentation regarding the internal IP address.
Now you can use this one as your new ingress (and get rid of NGINX), like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: frontend
              servicePort: 80
Or you could use NGINX internally as your ingress, as explained in option 2.
2 - First you must have a public IP with a load balancer associated with it. The backend of that LB must be set up according to your needs.
But here is the trick: do not create NGINX with that public IP, but with an internal IP and an internal load balancer. You can see how to do that in the following URL:
https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip
The important thing you must do is the NGINX override in the Helm parameters:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Of course the internal VNet must exist and the load balancer IP must be a valid one in it.
And the final trick, now that you have NGINX listening behind a private IP, is to verify that traffic from the public IP is redirected to that internal VNet. Of course, this depends on how the infrastructure behind the LB that holds the public IP is set up.
As stated in the comment above, you can do the same via Application Gateway in Azure. But if you are only going to use AKS, you might want to just use Application Gateway as your ingress controller, which is already created with the private cluster.
Please follow this to achieve the same: https://microsoft.github.io/AzureTipsAndTricks/blog/tip256.html
Based on your description, I understand that you want ingress traffic to go through your NGINX ingress controller, which has a LoadBalancer service with a static IP. If your deployment is correctly configured, then a LoadBalancer service with a public IP should be assigned to your NGINX ingress controller. Since I don't know your namespaces, naming of deployments etc., try:
kubectl get services --all-namespaces | grep -i loadbalancer
You should be able to find that an NGINX LoadBalancer service has a public IP. Now, since NGINX is your ingress controller, this means you have a layer 7 load balancer as ingress, so you need to create an ingress route to your application running in AKS. This is documented here from Azure (NGINX ingress) but also here (Ingress K8s).
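For example, a hedged sketch of such an ingress route (the hostname, service name and port are placeholders for your application):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress            # placeholder name
spec:
  ingressClassName: nginx         # route through the NGINX ingress controller
  rules:
    - host: myapp.example.com     # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app      # placeholder Service name inside the cluster
                port:
                  number: 80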

AKS: How to get rid of load balancer?

When you create an Azure Kubernetes Service (AKS) cluster, it creates by default a load balancer and the networking needed to access it. It creates these in a separate resource group.
We are, however, not interested at all in this load balancer, as we are going to use our own load balancer/ingress configured within Kubernetes itself.
Question: How can we prevent this Azure load balancer from being generated every time we create the cluster?
This was discussed in the AKS Creation Without Public Load Balancer GitHub issue, with the following explanation afterwards:
It is possible the SLB IP is just for egress. Let me try to clarify.
AKS with Basic Load Balancer
You can create it by passing the load balancer SKU parameter. It has implicit egress (you won't see a public IP, although it is there on the Azure infrastructure). You can create private services accessible only through private IPs using the internal annotation: https://learn.microsoft.com/en-us/azure/aks/internal-lb
AKS with Standard Load Balancer
Used by default on the latest clients, or explicitly by using the same parameter as above. It has only explicit egress, which means that if there isn't an egress IP the cluster won't have egress and will be broken. This is like the gateway-to-the-internet IP. You can control, pre-create or change this IP (or have more than one). You can create private services accessible through private IPs using the internal annotation: https://learn.microsoft.com/en-us/azure/aks/internal-lb
In some cases, enterprises might have egress defined via UDRs through a firewall etc., in which case that egress IP will not be used and is effectively not needed. But as of now it is needed at create time, as we don't know the egress path defined. We are now working on a UDR outbound type for SLB that will allow users to confirm they have egress through UDRs; in that case the SLB won't be created with a public IP for egress.
Like @Sajeetharan asked: what is the use case?
Also, is it mandatory for you to use AKS? Maybe you could resolve the same thing more easily with a regular kubeadm cluster:
Deploying a Kubernetes cluster in Azure using kubeadm

How to expose multiple kubernetes services through single azure load balancer?

I want to expose multiple services through a single load balancer. Each service points to exactly one pod.
So far I tried to:
kubectl expose pod <podName> --port=7000
And in the Azure portal I manually set either load-balancing rules or inbound NAT rules pointing to the exposed pod.
So far I can connect to the pod using the external IP and the specified port.
It depends on how you want to separate services on the same IP. The two ways that come to my mind are:
use NodePort services and then map some ports from your LB to that port on your cluster nodes. This gives separation by port.
way more interesting in my opinion is to use an Ingress/Ingress Controller. You would expose only the IC on standard ports like 80 & 443, and it would then map to your services by hostname and URI (see the sketch below).
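A rough sketch of what that hostname-based mapping could look like with a single Ingress resource (the hostnames and service names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service-ingress     # placeholder name
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com       # placeholder hostname for the first service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service # placeholder Service
                port:
                  number: 80
    - host: web.example.com       # placeholder hostname for the second service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service # placeholder Service
                port:
                  number: 80

Both hostnames resolve to the same load balancer IP; the ingress controller then routes each request to the right service.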
In Azure Container Service, Azure will use a load balancer to expose k8s services, like this:
root@k8s-master-E27AE453-0:~# kubectl get svc
NAME         CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
jasonnginx   10.0.41.194   52.226.33.200   8080:32011/TCP   4m
kubernetes   10.0.0.1      <none>          443/TCP          11m
mynginx      10.0.144.49   40.71.230.60    80:32366/TCP     5m
yournginx    10.0.147.28   40.71.226.23    80:32289/TCP     4m
root@k8s-master-E27AE453-0:~#
Via the Azure portal, check the Azure load balancer frontend IP configuration (you will see the different IP addresses).
ACS will create the load balancer rules and add the frontend IP addresses automatically.
How to expose multiple kubernetes services through single azure load balancer?
ACS exposes k8s services through that Azure load balancer. Do you mean you want to expose k8s services with a single public IP address?
If you want to expose k8s services with a single public IP address then, as Radek said, maybe you should use the NGINX Ingress Controller.
The Ingress Controller receives all traffic on one public IP and routes it to the right service based on hostname and path.
Thanks guys. I think I have found a viable solution to my problem. I should have been more specific about what I'm going to do.
I want to host a game server over UDP, so any Kubernetes ingress controller is not really an option, since they rarely support UDP routing.
I also don't need to host a multitude of services on a single machine; 1-4 pods per node is probably the maximum.
I found out about using:
hostNetwork: true
in the YAML config, and it actually works pretty well for this scenario.
I get an IP directly from the host node. I can then select the matching node in the load balancer and create a NAT or load-balancing rule.
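For reference, a minimal sketch of a pod spec using hostNetwork (the image and port are placeholders for the game server):

apiVersion: v1
kind: Pod
metadata:
  name: game-server               # placeholder name
spec:
  hostNetwork: true               # the pod shares the node's network namespace
  containers:
    - name: game-server
      image: myregistry/game-server:latest   # placeholder image
      ports:
        - containerPort: 7777     # UDP game port, reachable directly on the node's IP
          protocol: UDP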
Create multiple NodePort-type services and fix the nodePort of each one.
Then, on the cloud load balancer, set up multiple listener groups. The listen port is the same as the service's nodePort, and the targets are all the worker nodes.
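A sketch of one such service with a fixed nodePort (repeat per service, each with its own nodePort; names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: game-server-1             # placeholder name
spec:
  type: NodePort
  selector:
    app: game-server-1            # must match the pod's labels
  ports:
    - port: 7777
      targetPort: 7777
      nodePort: 30777             # fixed nodePort, used as the listener port on the cloud load balancer
      protocol: UDP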

Limit access to public ip (whitelist)

I want to set up a Kubernetes cluster with a load balancer. Kubernetes will create a load balancer in Azure and attach a public IP address to it.
But I don't want to make the API public; it should be exclusively for my API Management service.
I tried to direct the load balancer into a VNet together with the API Management service, but I found nothing.
So I thought I could just limit access to the public IP (a whitelist containing only the IP of my service), but I found nothing on the internet.
Is it possible to set such a rule on a public IP, or do I need some extra service for this problem?
With Kubernetes, assuming you have a Service defined, use the following commands:
kubectl get service
kubectl edit svc/<YOUR SERVICE>
Change the type from LoadBalancer to ClusterIP.
Now you can consume the service internally.
