How to get the actual source IP address on an application pod using HAProxy as ingress - haproxy-ingress

We have deployed HAProxy as ingress on our Kubernetes cluster in a remote DC.
Our use case is to get the actual source IP (client IP) on the application pod, which is PHP 7.2 based and running in httpd. But we are receiving the IP of the ingress, which is 193.168.100.15 (although it is a public IP, it is used inside our private network), so the application log shows:
193.168.100.15 Unauthorized access.
It should be 203.99.50.227, the IP of our NAT device.
On the Ingress I am using the following annotations.
annotations:
  haproxy.org/cors-allow-origin: "*"
  ingress.kubernetes.io/enable-cors: "true"
  haproxy.org/forwarded-for: "true"
and in the app service YAML file I am using the following annotation.
annotations:
  haproxy.org/forwarded-for: "true"
Please guide.

Try setting service.spec.externalTrafficPolicy to Local. This is probably provider-dependent, but it appears to work for GKE and AKS.
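For example, a minimal sketch of what that looks like on the ingress controller's Service (the name, labels, and ports here are placeholder assumptions, not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress            # hypothetical name of the ingress controller Service
spec:
  type: LoadBalancer
  # Local preserves the client source IP by routing only to endpoints on the receiving node
  externalTrafficPolicy: Local
  selector:
    app: haproxy-ingress           # assumed controller label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443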

Related

Exposing Non-HTTP Traffic on an AKS Cluster

I have set up an AKS cluster, with a pod configured to run multiple Tomcat services. My Apache web server is outside the AKS cluster, hosted on a VM in the same subnet. Apache sends requests to Tomcat at ajp://10.x.x.x:5009/dbp_webui, which is inside the AKS cluster. I am looking for options on how to expose the Tomcat service so that my Apache can make a successful connection.
You can use an Ingress to expose your service. From version 0.18.0, ingress-nginx supports the AJP protocol.
https://github.com/kubernetes/ingress-nginx/blob/main/Changelog.md#0180. Intro into ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
You will probably need to set an additional annotation to describe the backend protocol: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "AJP"
spec:
  ...
As @CSharpRocks mentioned in the comments, AKS nodes don't have public IP addresses by default. This means that a better option is to use the LoadBalancer service type.
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
It will deploy an LB that routes traffic to the pod no matter which node it resides on. AFAIK, AKS has an option to install an Ingress out of the box, with an LB.
Edit
Scratch this
Easier way: use a NodePort type service:
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
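As a rough sketch of that, a NodePort Service for the Tomcat AJP port might look like this (the names and labels are assumptions; only port 5009 and the dbp_webui path come from the question):

apiVersion: v1
kind: Service
metadata:
  name: tomcat-ajp                 # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: tomcat                    # assumed label on the Tomcat pod
  ports:
  - name: ajp
    port: 5009                     # AJP port from the question
    targetPort: 5009
    nodePort: 30009                # must fall in the default 30000-32767 range

Apache on the VM could then point at ajp://<node-ip>:30009/dbp_webui instead of the pod IP.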

Check kubernetes pod name from other VM

I have a separate VM in the same network as my Kubernetes cluster in Azure.
I have a Kafka pod and I am able to reach it using its IP. The problem is that the pod IP changes all the time.
Is there any way to get the correct IP each time the pod IP changes?
I would suggest using a Kubernetes Service to expose the pod. This avoids the problem of the pod IP changing, because the Service IP does not change.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record.
Since you are accessing the pod from outside the Kubernetes cluster itself, use a NodePort or LoadBalancer type Service.
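For illustration, a minimal NodePort Service for the Kafka pod might look like the sketch below (the name and label are placeholders, and 9092 is Kafka's conventional broker port, an assumption since the question doesn't state it):

apiVersion: v1
kind: Service
metadata:
  name: kafka                      # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: kafka                     # assumed label on the Kafka pod
  ports:
  - port: 9092                     # Kafka's conventional broker port (assumption)
    targetPort: 9092
    nodePort: 30092

The VM can then reach the broker at <any-node-ip>:30092 no matter how often the pod IP changes.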
As mentioned by @arghya-sadhu already, going for a Kubernetes Service is the best option. The Service gets an IP depending on its type.
For services of type ClusterIP, you get a cluster-internal IP address.
For services of type LoadBalancer, you get a load balancer IP address (i.e. a public IP address).
For services of type NodePort, you can access using the node's address.
But whatever the type of the service, you can access it using kube-dns within the cluster. So, let's say your service name is other-service, it exposes port 8080, and it runs in namespace abc; then you can access the service as follows:
http://other-service.abc:8080
Since your VM runs outside the cluster, it is better to use a LoadBalancer and access the pod using the load balancer URL or IP address. You can set up an Ingress in case there are multiple pods in the cluster that you want to connect to.
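A hedged sketch of the LoadBalancer variant (same placeholder names and assumed port 9092 as above; the internal-LB annotation is optional and keeps the IP private, which fits a VM in the same Azure network):

apiVersion: v1
kind: Service
metadata:
  name: kafka-lb                   # hypothetical Service name
  annotations:
    # assumption: keep the load balancer internal since the VM shares the network
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: kafka                     # assumed label on the Kafka pod
  ports:
  - port: 9092                     # Kafka's conventional broker port (assumption)
    targetPort: 9092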

How to provide inbound access from the public internet to an app hosted in an Azure private Kubernetes cluster

I deployed an application in an Azure K8S cluster, using NGINX as gateway, with a public static IP, based on AKS & PUBLIC-IP and on AKS & NGINX.
Now I need to deploy the application in an Azure private cluster, i.e. running in a private VNet (see CREATE PRIVATE AKS); attempting to assign a public static IP to NGINX does not work, which is to be expected as the load balancer expects a private IP, not a public IP.
How can I provide inbound access to my app hosted in a private cluster, using NGINX and a public static IP?
Hi, you have two ways to achieve that, depending on your needs (and Azure costs...):
1 - Use Azure Application Gateway. I use Terraform myself. And here you can see the official documentation regarding the internal IP address.
Now you can use this one as your new Ingress (and get rid of NGINX), like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
Or you could use NGINX internally as your ingress, as explained in option 2.
2- First you must have a Public IP with a Load Balancer associated with it.The backend from that LB must be up to your needs.
But here is the trick...Do not create NGINX with that public IP but with an internal IP and an internal load balancer, you can see how to do that in the following url:
https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip
And the important thing you must do is override the NGINX Helm parameters:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Of course the internal VNet must be created, and the load balancer IP must be a correct one.
And the final trick, now that you have NGINX listening behind a private IP, is to verify that traffic from the public IP is redirected to that internal VNet... Of course, it depends on how the infrastructure behind the LB that holds the public IP is set up.
As stated in the comment above, you can do the same via Application Gateway in Azure. But if you are going to use only AKS, you might want to just use Application Gateway as your ingress controller, which is already created with the private cluster.
Please follow this to achieve the same https://microsoft.github.io/AzureTipsAndTricks/blog/tip256.html
Based on your description, I understand that you want ingress traffic to go through your NGINX ingress controller, which has a LoadBalancer service with a static IP. If your deployment is correctly configured, then a LoadBalancer service should be assigned to your NGINX ingress controller with a public IP. Since I don't know your namespaces, naming of deployments, etc., try:
kubectl get services --all-namespaces | grep -i loadbalancer
You should be able to find that an NGINX LoadBalancer service has a public IP. Now, since NGINX is your ingress controller, this means that you have a Layer 7 load balancer as ingress, so you need to create an ingress route to your application running in AKS. This is documented here for Azure NGINX ingress, but also here: Ingress K8s.
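For illustration, a minimal ingress route might look like the sketch below (hostname, Service name, and port are placeholders; it uses the same extensions/v1beta1 API as the other examples in this thread):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress             # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com          # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-service   # hypothetical backend Service
          servicePort: 80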

How to setup Aks Ingress with Azure Private DNS

I created AKS with an internal ingress NGINX; it comes up in the cluster with a private load balancer IP. Then I created an Azure Private DNS zone and added a 'Record set' pointing my domain at that IP.
Technically, I should be able to access the LoadBalancer external IP via promotion.mydomain.com (as an example). Instead, I'm getting a '502 Bad Gateway' error when I hit http://promotion.mydomain.com in the browser. Any advice?
I faced the same issue and have been able to solve it recently.
I created another Ingress, but in the desired namespace (mine was default), with the following definition
(I have enabled TLS but you can remove that part):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: my-custom-ingress
spec:
  tls:
  - hosts:
    - foo.mydomain.com
    secretName: my-tls-secret
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-foo-app-service-nodeport
          servicePort: 4444
First of all, find the EXTERNAL-IP of your NGINX ingress and keep it in mind:
kubectl get svc --namespace ingress-basic
Then, in the Azure DNS zone, you can attach the domain to an Azure resource:
1. Open the Azure portal.
2. Go into the MC_... resource group created by your AKS cluster.
3. Find the LoadBalancer resource and click it.
4. On the LoadBalancer, go into "Frontend IP Configuration". You'll then see a list of public IPs with a related ResourceId (example: 11.22.33.44 (xxx-yyyy-bbb)).
5. Find the IP that corresponds to the EXTERNAL-IP you found with kubectl before step 1, and memorize the associated object id.
6. Open your Azure DNS zone and create a new domain (or edit one).
7. Set "Alias Record Set: Yes", then "Alias type: Resource".
8. Under "Azure Resource", find the resource that has the ResourceId you found in step 5 and select it.
9. Save.
Now it should work.
I see your purpose is to create AKS with an internal ingress NGINX and use a custom DNS. And I see your ingress external IP is 10.240.0.42. It seems it's a private IP of the subnet your AKS nodes are in.
So I think you need to create an Azure Application Gateway or an Azure Load Balancer to route requests from the Internet to your internal ingress NGINX interface. The A record also needs to be changed: point it at the public IP of the Application Gateway or Load Balancer you choose. And of course you need to update the custom DNS setting in whichever DNS server hosts your domain.
When all of this is done, the request routing path will look like this:
Internet (your custom DNS)
→ Azure DNS server
→ Azure public IP of the Application Gateway or Load Balancer (this is what I think you missed)
→ 10.240.0.42 (ingress NGINX internal IP)
→ AKS ingress NGINX
→ Service
→ Deployment or Pod

Connection from a pod to a public IP backed by a LoadBalancer that goes back to k8s

I have deployed the NGINX ingress, which deploys a service of type LoadBalancer with a public IP. externalTrafficPolicy is set to Local to preserve the client IP.
The Azure load balancer is correctly configured with all the nodes, and the health check is there to "disable" the nodes without the LB pod.
In the direction Internet => pod, it works well. But when a pod tries to make a request using the domain associated with the public IP of the LB, it fails whenever that pod does not run on the same node as one of the LB's pods.
For such a node, the ipvsadm -Ln command returns:
TCP PUBLICIP:80 rr
TCP PUBLICIP:443 rr
For the node that runs the pod:
TCP PUBLICIP:80 rr
-> 10.233.71.125:80 Masq 1 4 0
TCP PUBLICIP:443 rr
-> 10.233.71.125:443 Masq 1 0 0
The IPVS configuration seems legit according to the documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support (it is for AWS, but I guess it should be valid for Azure)
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport
Is this an issue or a limitation?
If it is a limitation, how can it be worked around? E.g.:
Deploy the LB as a DaemonSet, with the downside of having as many LB pods as nodes
Do not use the public domain but a Kubernetes FQDN (not easy to implement)
Are there other solutions?
Thank you!
Versions/Additional details:
k8s: 1.14.4
Cloud provider: Azure (not AKS)
I ended up implementing what is suggested in this issue and this article.
I added the following snippet to the CoreDNS ConfigMap:
rewrite stop {
    name regex example\.com nginx-ingress-controller.my-ns.svc.cluster.local
    answer name nginx-ingress-controller.my-ns.svc.cluster.local example.com
}
It uses the rewrite plugin. This worked well; the only downside is that it relies on a static definition of the ingress controller FQDN.
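For context, here is a sketch of where that snippet sits inside the CoreDNS ConfigMap; everything around the rewrite block is the stock Corefile and may differ slightly between cluster versions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # rewrite the external name to the in-cluster ingress controller Service
        rewrite stop {
            name regex example\.com nginx-ingress-controller.my-ns.svc.cluster.local
            answer name nginx-ingress-controller.my-ns.svc.cluster.local example.com
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }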
