Connection from a pod to a public IP backed by a LoadBalancer that goes back to Kubernetes - Azure

I have deployed the nginx Ingress controller, which creates a Service of type LoadBalancer with a public IP. externalTrafficPolicy is set to Local to preserve the client IP.
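For reference, a minimal sketch of such a Service (the name, namespace, and selector are assumptions, not taken from my actual manifest):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # assumed name, matching the FQDN used in the answer below
  namespace: my-ns
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # keeps the original client source IP
  selector:
    app: nginx-ingress             # assumed selector
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443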
The Azure load balancer is correctly configured with all the nodes, and the health check is there to take the nodes that run no ingress controller pod out of rotation.
In the Internet => pod direction everything works well. But when a pod tries to make a request using the domain associated with the public IP of the LB, the request fails whenever that pod is not running on the same node as one of the ingress controller pods.
On a node without an ingress controller pod, the ipvsadm -Ln command returns:
TCP PUBLICIP:80 rr
TCP PUBLICIP:443 rr
On the node that runs the ingress controller pod:
TCP PUBLICIP:80 rr
-> 10.233.71.125:80 Masq 1 4 0
TCP PUBLICIP:443 rr
-> 10.233.71.125:443 Masq 1 0 0
The IPVS configuration seems legit according to the documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support (it is for AWS, but I guess it should be valid for Azure)
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport
Is this an issue or a limitation?
If it is a limitation, how can I work around it? E.g.:
Deploy the ingress controller as a DaemonSet, with the downside of having as many controller pods as nodes
Do not use the public domain but a Kubernetes FQDN (not easy to implement)
Are there other solutions?
Thank you!
Versions/Additional details:
k8s: 1.14.4
Cloud provider: Azure (not AKS)

I ended up implementing what is suggested in this issue and this article.
I added the following snippet to the CoreDNS ConfigMap:
rewrite stop {
    name regex example\.com nginx-ingress-controller.my-ns.svc.cluster.local
    answer name nginx-ingress-controller.my-ns.svc.cluster.local example.com
}
It uses the rewrite plugin. This worked well; the only downside is that it relies on a static definition of the ingress controller FQDN.
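For context, the rewrite block goes inside the server block of the Corefile, so the resulting ConfigMap looks roughly like this (the surrounding plugins are the defaults of a typical deployment from that era and may differ in yours):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite stop {
            name regex example\.com nginx-ingress-controller.my-ns.svc.cluster.local
            answer name nginx-ingress-controller.my-ns.svc.cluster.local example.com
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
    }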

Related

How to get actual source IP address on Application Pod using HAproxy as Ingress

We have deployed HAProxy as the ingress on our Kubernetes cluster in a remote DC.
Our use case is to get the actual source IP (client IP) on the application pod, which is PHP 7.2 based and running in httpd. But we are receiving the IP of the ingress, which is 193.168.100.15 (a public IP, but used as a private network address in our Kubernetes setup). The application log shows:
193.168.100.15 Unauthorized access.
It should instead be 203.99.50.227, the IP of our NAT device.
On the Ingress I am using the following annotations:
annotations:
  haproxy.org/cors-allow-origin: "*"
  ingress.kubernetes.io/enable-cors: "true"
  haproxy.org/forwarded-for: "true"
and in the app Service YAML file I am using the following annotation:
annotations:
  haproxy.org/forwarded-for: "true"
Please guide.
Try setting service.spec.externalTrafficPolicy to Local. This is probably provider-dependent, but it appears to work on GKE and AKS.
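For example, assuming the HAProxy ingress Service is named haproxy-ingress and lives in the default namespace (adjust both to your setup), it can be patched in place:

kubectl patch svc haproxy-ingress -p '{"spec":{"externalTrafficPolicy":"Local"}}'

or set directly in the Service manifest:

spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # deliver traffic only to node-local endpoints, preserving the source IP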

How to create multiple load balancer IP addresses on one ingress controller on Azure AKS

I'm trying to set up multiple services on one k8s cluster, with one Ingress controller in front that does TLS termination for all services.
This is a good example: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/multi-tls/multi-tls.yaml
I initially followed this example: https://github.com/brunoterkaly/ingress, and then expanded it to have multiple TLS services.
By exposing my nginx replication controller, AKS on Azure automatically creates a load balancer and a public IP address, to which I can apply an A record:
kubectl expose rc nginx-ingress-rc --port="80,443" --type="LoadBalancer"
However, I also want a second A record that points to the same IP address (I guess?), so that I can access my ingress controller from different domains. I can't figure out how to let AKS create a second one for that purpose.
Maybe this is a bit too late and not exactly what the original post was asking for, but unofficially you can create multiple Services for your ingress controller and thus also map multiple IPs. It is only a limitation of a single k8s Service of type LoadBalancer that it can refer to a single IP address.
In my case, I have an AKS cluster with many namespaces and different applications reachable via different URLs. Each URL has a different public IP address for historical reasons:
Example:
first.example.com -> 1.2.3.4
second.example.com -> 5.6.7.8
...we could go on, IPs are just made up!
I wanted to install the nginx ingress into AKS so that it would handle routing for all namespaces, instead of handling each application with a dedicated LoadBalancer and reverse proxy pair. For that I followed the steps described here and provided just one of the IPs as controller.service.loadBalancerIP="1.2.3.4" during the ingress deployment process (using helm).
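The helm invocation looks roughly like this (chart repo and namespace assumed from the standard ingress-nginx installation docs):

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.loadBalancerIP="1.2.3.4"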
After these steps I could see these services:
PS C:\> kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.0.141.84    1.2.3.4       80:30065/TCP,443:31444/TCP   14h
ingress-nginx-controller-admission   ClusterIP      10.0.168.127   <none>        443/TCP                      14h
After that I manually created another Service, ingress-nginx-controller-second, with exactly the same selector labels and the next IP: loadBalancerIP: 5.6.7.8.
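A sketch of that second Service; the selector labels are assumed to match the ones the ingress-nginx helm chart puts on the controller pods:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-second
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 5.6.7.8
  selector:                                   # same labels as the original controller Service
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https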
With that applied, I could see these services:
PS C:\> kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.0.141.84    1.2.3.4       80:30065/TCP,443:31444/TCP   14h
ingress-nginx-controller-admission   ClusterIP      10.0.168.127   <none>        443/TCP                      14h
ingress-nginx-controller-second      LoadBalancer   10.0.204.118   5.6.7.8       80:31275/TCP,443:32751/TCP   10m
Now if I list the example Ingresses that I defined for the two applications, you can see that both show the same first public IP. This is because that IP is used by the ingress controller as its publish address. Nevertheless, routing works nicely, since the second Service forwards all traffic to the same ingress-nginx-controller, and you can route based on different hosts/paths just like always.
PS C:\Users\sk1u04h9> kubectl get ingress -A
NAMESPACE   NAME             CLASS    HOSTS                ADDRESS   PORTS     AGE
first       first-ingress    <none>   first.example.com    1.2.3.4   80, 443   14h
second      second-ingress   <none>   second.example.com   1.2.3.4   80, 443   5m
I hope this helps someone trying to migrate to the nginx ingress without needing all URLs to point to a single public IP up front. After this migration step you can ask for all URLs to point to just one IP if needed, and then of course remove all the ingress-nginx-controller-* services, which will no longer be needed.
Meanwhile I understand Ingress a bit better: only one IP address is created for the ingress controller, which supports path- and host-based routing. See https://learn.microsoft.com/en-us/azure/aks/ingress for an example.
I'll just need to configure my DNS with different CNAME records pointing to the A record that is associated with the Azure public IP address. If I want to use multiple *.cloudapp.azure.com FQDNs for my services, which is what I was trying to achieve at first, I'll have to use Azure DNS.

How to expose multiple Kubernetes services through a single Azure load balancer?

I want to expose multiple services through a single load balancer. Each service points to exactly one pod.
So far I have tried:
kubectl expose pod <podName> --port=7000
and then, in the Azure portal, manually setting either load balancing rules or inbound NAT rules pointing to the exposed pod.
So far I can connect to the pod using the external IP and the specified port.
It depends on how you want to separate the services on the same IP. Two ways come to mind:
Use NodePort services and map ports from your LB to those ports on your cluster nodes. This gives separation by port.
Way more interesting, in my opinion, is to use an Ingress/IngressController. You would expose only the IC on standard ports like 80 and 443, and it then maps to your services by hostname and URI; see the sketch below.
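A minimal host-based routing sketch (hostnames and backend Service names are made up; this uses the current networking.k8s.io/v1 API, while older clusters used extensions/v1beta1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: first.example.com          # routed by hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: first-svc          # hypothetical backend Service
            port:
              number: 80
  - host: second.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: second-svc         # hypothetical backend Service
            port:
              number: 80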
In Azure Container Service, Azure uses a load balancer to expose k8s services, like this:
root@k8s-master-E27AE453-0:~# kubectl get svc
NAME         CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
jasonnginx   10.0.41.194   52.226.33.200   8080:32011/TCP   4m
kubernetes   10.0.0.1      <none>          443/TCP          11m
mynginx      10.0.144.49   40.71.230.60    80:32366/TCP     5m
yournginx    10.0.147.28   40.71.226.23    80:32289/TCP     4m
root@k8s-master-E27AE453-0:~#
Via the Azure portal, check the Azure load balancer frontend IP configuration (different IP addresses):
ACS creates the load balancer rules and adds the frontend IP addresses automatically.
How to expose multiple kubernetes services through a single azure load balancer?
ACS exposes k8s services through that Azure load balancer. Do you mean you want to expose the k8s services with a single public IP address?
If you want to expose k8s services with a single public IP address, as Radek said, you should probably use the Nginx Ingress Controller.
The Ingress Controller accepts all traffic on one public IP and routes it to the right backend Service based on the request's host and path.
Thanks guys. I think I have found a viable solution to my problem; I should have been more specific about what I'm going to do.
I want to host a game server over UDP, so a Kubernetes ingress controller is not really an option, since they rarely support UDP routing.
I also don't need to host a multitude of services on a single machine; 1-4 pods per node is probably the maximum.
I found out about using
hostNetwork: true
in the YAML config, and it actually works pretty well for this scenario.
I get an IP directly from the host node. I can then select the matching node within the load balancer and create a NAT or load balancing rule.
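A sketch of what that looks like in a Deployment; the image and UDP port are made up for a hypothetical game server:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-server
  template:
    metadata:
      labels:
        app: game-server
    spec:
      hostNetwork: true                       # pod shares the node's network namespace
      containers:
      - name: game-server
        image: example/game-server:latest     # hypothetical image
        ports:
        - containerPort: 27015                # illustrative UDP game port
          protocol: UDP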
Create multiple NodePort-type Services and fix each nodePort.
Then, on the cloud load balancer, set up multiple listeners: each listen port is the same as a Service's nodePort, and the targets are all the worker nodes.
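A sketch of such a Service with a pinned nodePort (names and port numbers are illustrative; the nodePort must fall within the cluster's NodePort range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 7000
    targetPort: 7000
    nodePort: 30700    # pinned, so the cloud LB listener can target it on every node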

Limit access to a public IP (whitelist)

I want to set up a Kubernetes cluster with a load balancer. Kubernetes will create a load balancer in Azure and attach a public IP address to it.
But I don't want to make the API public; it should be exclusive to my API Management service.
I tried to direct the load balancer into a VNet with the API service, but I found nothing.
So I thought I could just limit access to the public IP (a whitelist with only the included IPs of my service), but I found nothing on the internet.
Is it possible to set such a rule on a public IP, or do I need some extra service for this problem?
With Kubernetes, assuming you have a Service defined, use the following commands:
kubectl get service
kubectl edit svc/<YOUR SERVICE>
and change the type from LoadBalancer to ClusterIP.
Now you can consume the service internally.
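In the Service manifest this amounts to changing a single field; a minimal sketch with an assumed Service name and port:

apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: ClusterIP        # was: LoadBalancer; Azure tears down the public LB rule
  selector:
    app: my-api
  ports:
  - port: 80
    targetPort: 8080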

How do I expose a Kubernetes service to the internet?

I am running a Kubernetes cluster with 1 master (also a node) and 2 nodes on Azure, using Ubuntu with the Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an Azure subnet. The master has a NIC attached to it with a public IP. This means that if I run a simple server listening on port 80, I can reach it via a domain name (Azure offers the option of a domain name for a public IP).
I was also able to reach the Kubernetes guestbook frontend service with a hack: I checked all the listening ports on the master and tried each one with the public IP, and one of them hit the service and returned a response. Based on my understanding, this goes directly to the pod running on the master (which is also a node) rather than through the service IP (which would have load-balanced across any of the pods).
My question is: how do I map the external IP to the service IP? I know Kubernetes has a setting that works only on GCE (which I can't use right now), but is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command, there is a flag for this:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or, if you create the Service from a JSON or YAML file, use the spec.externalIPs array.
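A minimal sketch of the YAML form; the Service name, selector, and IP are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: guestbook          # assumed pod labels
  ports:
  - port: 80
  externalIPs:
  - 1.2.3.4                 # the node's public (or otherwise routable) IP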
