I'm trying to create a load balancer for an Azure Kubernetes deployment. I'm using the following YAML file:
apiVersion: v1
kind: Service
metadata:
  name: test-api-lb
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XXX.XXX.XXX
  ports:
  - port: 8080
  selector:
    app: test-api
and run it with
kubectl apply -f
What I need is to create a load balancer with source IP affinity.
I found the following page, https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode, which explains how to configure this on Azure and which distribution modes the load balancer supports. There is a LoadBalancerDistribution attribute that specifies the mode. Unfortunately, I couldn't find any documentation on how this can be done for a Kubernetes deployment.
Thanks in advance
Rather than creating session affinity from the Azure load balancer to a specific node, you should configure it on the Kubernetes Service itself by setting sessionAffinity to ClientIP, as described here.
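For example, a minimal sketch of the Service from the question with client IP affinity added (the timeout is optional and shown only for illustration):
apiVersion: v1
kind: Service
metadata:
  name: test-api-lb
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XXX.XXX.XXX
  sessionAffinity: ClientIP        # pin each client IP to the same backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # optional; 3 hours is the Kubernetes default
  ports:
  - port: 8080
  selector:
    app: test-api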
We have defined our internal load balancer:
apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-aks-test
It has a cluster IP and an external IP. We want to access this service from a VM in another virtual network.
We need to know its DNS name (the fully qualified name) in advance, because we deploy multiple applications from a deployment platform and want to know, based on the service name alone, how to reach the service once it has been deployed, without waiting for the IP address to be determined (either manually or automatically). So, for example, this is our APP1, and after it we automatically install APP2, which needs to reach this service.
For that reason we would like to avoid using the IP information.
How can we determine the service "hostname" by which the second application will access it?
The only information I found in the docs is: "If your service is using a dynamic or static public IP address, you can use the service annotation service.beta.kubernetes.io/azure-dns-label-name to set a public-facing DNS label." But that is for a public load balancer, which we do not want!
Set up ExternalDNS in your K8s cluster. Here is a guide for Azure Private DNS. This will allow you to update the DNS record for any hostname you pick for the service, dynamically via Kubernetes resources.
A sample config looks like this (excerpted from the Azure Private DNS guide):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: externaldns
spec:
  selector:
    matchLabels:
      app: externaldns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: externaldns
    spec:
      containers:
      - name: externaldns
        image: k8s.gcr.io/external-dns/external-dns:v0.7.3
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=example.com
        - --provider=azure-private-dns
        - --azure-resource-group=externaldns
        - --azure-subscription-id=<use the id of your subscription>
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azure-config-file
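Once ExternalDNS is running, you publish a name for the internal load balancer by annotating the Service; a minimal sketch, assuming the example.com zone from the config above and a record name chosen purely for illustration:
apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # ExternalDNS creates/updates this record in the private zone
    external-dns.alpha.kubernetes.io/hostname: ads-aks-test.example.com
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-aks-test
APP2 can then always resolve ads-aks-test.example.com, regardless of which IP the internal load balancer ends up with.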
An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
https://learn.microsoft.com/en-us/azure/aks/internal-lb
It seems you want this configuration? Is there a peering between the two virtual networks? You also need to allow the communication in the NSG.
You can run kubectl get svc
and use the EXTERNAL-IP of the service ads-aks-test; since you set the annotation to "true", it will be an internal IP.
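For illustration (the addresses here are made up), the internal frontend IP shows up in the EXTERNAL-IP column:
kubectl get svc ads-aks-test
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
ads-aks-test   LoadBalancer   10.0.148.12   10.240.0.25   9000:31024/TCP   2m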
If you are looking to resolve the service name within the same cluster, you can use the service name itself.
https://kubernetes.io/docs/concepts/services-networking/service/
You can use something like: your-svc.your-namespace.svc.cluster.local
Note that this only works when the services are in the same Kubernetes cluster.
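For example, assuming the ads-aks-test service from the question is in the default namespace, a pod in the same cluster could reach it with:
curl http://ads-aks-test.default.svc.cluster.local:9000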
Hi all,
let's see if someone can help me.
I have configured Azure Kubernetes Service (AKS) version 1.13.
I am trying to create an Ingress with a static IP, but I can't get it to work.
I am using kubectl create -f static-ip-svc.yaml
#File
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerIP: 40.121.219.126
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    # Selects nginx-ingress-controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
and it shows this error:
Type     Reason                      Age               From                Message
----     ------                      ----              ----                -------
Normal   EnsuringLoadBalancer        8s (x4 over 43s)  service-controller  Ensuring load balancer
Warning  CreatingLoadBalancerFailed  7s (x4 over 43s)  service-controller  Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-ingress-lb: timed out waiting for the condition
To create an Ingress with a static IP, there are two ways to achieve it. But first, you need to understand which resource groups the Azure Kubernetes Service and its infrastructure use.
See Why are two resource groups created with AKS? It explains that there are two resource groups: one for the Azure Kubernetes Service itself and another for its infrastructure. Accordingly, there are two ways to create an Ingress with a static IP.
Here are the two ways:
Use a static IP created in the infrastructure group named MC_xxxx_xxxx_location.
Use a static IP created in a group other than MC_xxxx_xxxx_location. In this case, you need to assign enough permission to the service principal of AKS, at least "Network Contributor" on that group.
You can get more details in "Use a static public IP address with the Azure Kubernetes Service (AKS) load balancer" here. I think you used the second way but did not assign enough permission, which is why you got the error. Check the steps and try again.
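As a sketch of the second way (the resource group and cluster names below are placeholders), granting the AKS service principal Network Contributor on the resource group that holds the static IP looks roughly like this:
# Find the service principal (client ID) used by the cluster
CLIENT_ID=$(az aks show -g myAKSResourceGroup -n myAKSCluster \
  --query "servicePrincipalProfile.clientId" -o tsv)

# Scope of the resource group that contains the static public IP
RG_SCOPE=$(az group show -g myStaticIPResourceGroup --query id -o tsv)

# Grant at least Network Contributor on that group
az role assignment create --assignee "$CLIENT_ID" --role "Network Contributor" --scope "$RG_SCOPE"
If the IP lives outside the MC_ group, you also typically have to tell the cloud provider where to look for it, via the service annotation service.beta.kubernetes.io/azure-load-balancer-resource-group set to that group's name.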
We have created a Kubernetes cluster on Azure VMs, with a Kube master and two nodes. We have deployed an application and created a service with type NodePort, which works well. But when we try to use type: LoadBalancer, the service is created and the external IP stays in the pending state. Currently we are unable to create a service of type LoadBalancer, and because of this the nginx "ingress" controller ends up in the same state. So we are not sure how to set up load balancing in this case.
We have tried creating a load balancer in Azure and using its IP in the service, as shown below:
kind: Service
apiVersion: v1
metadata:
  name: jira-service
  labels:
    app: jira-software
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: jira-software
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  ports:
  - name: jira-http
    port: 8080
    targetPort: jira-http
Similarly, we have one more application running on this Kube cluster, and we want to access the applications based on the context path:
if we invoke Jira, it should call the backend Jira server: http://dns-name/jira
if we invoke some other app, like Bitbucket: http://dns-name/bitbucket
If I understand correctly, you used type LoadBalancer on plain virtual machines, which will not work out of the box; type LoadBalancer works automatically only in managed Kubernetes services like GKE, AKS, etc.
You can find more information here.
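For the context-path part of the question, the usual approach is to put an Ingress in front of the services; a minimal sketch, assuming an nginx ingress controller is installed, a hypothetical bitbucket-service exists next to jira-service, and the apps themselves serve under /jira and /bitbucket:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  ingressClassName: nginx           # assumes the nginx ingress controller
  rules:
  - http:
      paths:
      - path: /jira
        pathType: Prefix
        backend:
          service:
            name: jira-service      # from the question above
            port:
              number: 8080
      - path: /bitbucket
        pathType: Prefix
        backend:
          service:
            name: bitbucket-service # hypothetical second service
            port:
              number: 7990          # hypothetical port; adjust to your setup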
I have to create a Kubernetes cluster in MS Azure manually, not using AKS. So:
I've created 2 VMs in one availability set: one for the k8s master and a second for a k8s node.
I've created an external load balancer and added the 2 VMs to its backend pool.
I've created the k8s cluster using kubespray.
I've created a Deployment and a LoadBalancer Service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wrapper
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wrapper
    spec:
      containers:
      - name: wrapper
        image: wrapper:latest
        ports:
        - containerPort: 8080
          name: wrapper
---
apiVersion: v1
kind: Service
metadata:
  name: wrapper
spec:
  loadBalancerIP: <azure_loadbalancer_public_ip>
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: wrapper
But the LoadBalancer service's EXTERNAL-IP is always pending:
kubectl get services
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP
wrapper   LoadBalancer   10.233.38.7   <pending>
Also, telnet azure_loadbalancer_public_ip doesn't work. I've tried to use NodePort instead of LoadBalancer, but in that case I get two endpoints for my service, one on the k8s master and one on the k8s node.
What I want is one entry point, azure_loadbalancer_public_ip, that balances traffic between all nodes in the cluster.
Could you please help me understand what I'm doing wrong, and whether it is possible to "bind" the Azure external load balancer to a LoadBalancer service in Kubernetes?
You don't have to do that; k8s (when it's configured properly) handles that for you. All you have to do is give it the proper rights to be able to create a load balancer in Azure.
It basically can't talk to the Azure API to create a load balancer. You basically need to:
1. Add the option --cloud-provider=azure to your kube-apiserver, kube-controller-manager and all the kubelets running on your nodes.
2. Make sure that your Azure VMs have access to the Azure API.
3. Restart all the components from step 1.
This is not needed if you have the Azure Cloud Controller Manager installed, which is beta in K8s 1.12 as of this writing. Note that the --cloud-provider option will be deprecated at some point in favor of it.
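For reference, a rough sketch of what that configuration looks like (every value below is a placeholder, and the exact set of fields depends on your cluster): the components get --cloud-provider=azure plus --cloud-config=/etc/kubernetes/azure.json, and the azure.json cloud config file looks roughly like this:
{
  "cloud": "AzurePublicCloud",
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "aadClientId": "<service-principal-app-id>",
  "aadClientSecret": "<service-principal-secret>",
  "resourceGroup": "<resource-group-of-the-vms>",
  "location": "<azure-region>",
  "vnetName": "<vnet-name>",
  "subnetName": "<subnet-name>",
  "securityGroupName": "<nsg-name>",
  "routeTableName": "<route-table-name>",
  "primaryAvailabilitySetName": "<availability-set-name>"
}
With this in place, the controller manager can call the Azure API and wire the LoadBalancer Service to the Azure load balancer for you.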
I have the following service for the Kubernetes dashboard:
Name:              kubernetes-dashboard
Namespace:         kube-system
Labels:            k8s-app=kubernetes-dashboard
                   kubernetes.io/cluster-service=true
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"kind":"Service","apiVersion":"v1","metadata":{"name":"kubernetes-dashboard","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"k...
Selector:          k8s-app=kubernetes-dashboard
Type:              NodePort
IP:                10.0.106.144
Port:              <unset>  80/TCP
NodePort:          <unset>  30177/TCP
Endpoints:         10.244.0.11:9090
Session Affinity:  None
Events:            <none>
According to the documentation, I ran
az acs kubernetes browse
and it works on http://localhost:8001/ui
But I want to access it outside the cluster too. The describe output says that it is exposed using NodePort on port 30177.
But I'm not able to access it on http://<any node IP>:30177
As we know, to expose a service to the internet we can use NodePort or a LoadBalancer.
As far as I know, Azure does not support the NodePort type right now.
But I want to access it outside the cluster too.
We can use a LoadBalancer to re-create the Kubernetes dashboard service. Here are my steps:
Delete kubernetes-dashboard via the Kubernetes UI: set the Namespace to kube-system, then select Services, then delete it.
Modify kubernetes-dashboard-service.yaml: SSH to the master VM, then change the type from NodePort to LoadBalancer:
root@k8s-master-47CAB7F6-0:/etc/kubernetes/addons# vi kubernetes-dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer
Start kubernetes browse from Azure CLI 2.0:
C:\Users>az acs kubernetes browse -g k8s -n containerservice-k8s
Then SSH to the master VM to check the status.
Now we can browse the UI via the public IP address.
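The assigned public IP can also be read straight from kubectl once Azure finishes provisioning; the EXTERNAL-IP column switches from <pending> to the address:
kubectl get svc kubernetes-dashboard -n kube-system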
Update:
As the architecture of an Azure Container Service (Kubernetes) cluster shows, we should use the Azure load balancer to expose services to the internet.
On second thought, this actually is expected to NOT work. The only public IP in the cluster, by default, is for the load balancer on the masters. And that load balancer obviously is not configured to forward random ports (like 30000-32767 for example). Further, none of the nodes directly have a public IP, so by definition NodePort is not going to work external to the cluster.
The only way you're going to make this work is by giving the nodes public IP addresses directly. This is not encouraged for a variety of reasons.
If you merely want to avoid waiting... then I suggest:
Don't delete the Service. Most dev scenarios should just be kubectl apply -f <directory>, in which case you don't really need to wait for the Service to re-provision.
Use Ingress along with 'nginx-ingress-controller' so that you only need to wait for the full LB+NSG+PublicIP provisioning once, and then can just add/remove Ingress objects in your dev scenario.
Use minikube for development scenarios, or manually add public IPs to the nodes to make the NodePort scenario work.
You can't expose the service via NodePort by running the kubectl expose command; you get a VIP address outside the range of the subnets your cluster sits on. Instead, deploy the service through a YAML file where you can specify an internal load balancer as the type, which will give you a local IP on the master subnet that you can reach over the internal network.
Or you can just expose the service with an external load balancer and get a public IP that is reachable from the internet.