I have an Azure Kubernetes Service cluster running version 1.15.7. This cluster recently replaced an older one (version 1.12.something). In the past, once the various service pods were up and running, we would create a public IP resource in the Azure portal, assign it a name, and then create a Service resource like this:
apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  labels:
    app: myservice
spec:
  ports:
  - port: 80
    name: myservice-frontend
    targetPort: 80
  - port: 443
    name: myservice-frontend-ssl
    targetPort: 443
  selector:
    app: myservice-frontend
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
Finally, we'd add the public IP to a Traffic Manager instance.
Since upgrading to 1.15, this doesn't seem to work anymore. We can go through all the above steps, but as soon as the Service/Load Balancer is created, the public IP loses its DNS name, which causes it to be evicted from Traffic Manager. We can reset the name, but within 36-48 hours it gets lost again. My suspicion is that AKS is trying to apply a name to the associated IP address, but since I haven't defined one above, it just sets it to null.
How can I tell AKS what name to assign to a public IP? Better yet, can I skip the static public IP and let AKS provision a dynamic address and simply add the DNS name to Traffic Manager?
This is indeed a bug in AKS 1.15.7; see the issue Azure - PIP dns label will be default deleted.
The upshot is that this is part of a new feature in 1.15 that allows the DNS label for a LoadBalancer IP to be set in the Service configuration. So the definition above can become:
apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  labels:
    app: myservice
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: myservice-frontend
spec:
  ports:
  - port: 80
    name: myservice-frontend
    targetPort: 80
  - port: 443
    name: myservice-frontend-ssl
    targetPort: 443
  selector:
    app: myservice-frontend
  type: LoadBalancer
And the service will automatically be assigned a new public IP whose DNS label is taken from the annotation (the resulting FQDN has the form <label>.<region>.cloudapp.azure.com).
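If you still want to keep a pre-provisioned static IP, as in the original workflow, the annotation can be combined with loadBalancerIP. A minimal sketch, where 1.2.3.4 stands in for your reserved address:

apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  annotations:
    # The label below yields myservice-frontend.<region>.cloudapp.azure.com
    service.beta.kubernetes.io/azure-dns-label-name: myservice-frontend
spec:
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4   # placeholder for the pre-created static public IP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: myservice-frontend

Either way, the FQDN produced by the annotation is what you add to Traffic Manager, so the endpoint survives even if the underlying address changes.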
Related
In Azure, I am using Helm to deploy a service (type=LoadBalancer). Below is the manifest file:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}-service-lb
  labels:
    app: {{ template "app.fullname" . }}-service-lb
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: {{.Values.appServPort}}
    nodePort: {{.Values.lbPort}}
    protocol: TCP
  selector:
    app: {{ template "app.fullname" . }}-service
Is it possible to tell the Kubernetes cluster to use a specific IP as the External IP every time I deploy the service?
/* -- EDITED -- */
Every time the load balancer service is deployed, a new external IP is allocated. In my case I want it to use the same IP each time, and it can be assumed that the address is not already in use within the network.
/* ---- */
My understanding is that the Kubernetes cluster will allocate an external IP on every deployment if one is not specified in the manifest file.
There is Azure documentation that details how to use a static IP within the manifest file, along with a demo link. I'm just quoting from the docs:
If you would like to use a specific IP address with the internal load balancer, add the loadBalancerIP property to the load balancer YAML manifest. In this scenario, the specified IP address must reside in the same subnet as the AKS cluster and must not already be assigned to a resource. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet.
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
  - port: 80
  selector:
    app: internal-app
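Applied to the Helm template from the question, this just means adding a loadBalancerIP line. A sketch, where .Values.lbStaticIP is a made-up values key for the reserved address:

apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}-service-lb
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  # Hypothetical values key; must be a free address in the cluster's subnet
  loadBalancerIP: {{ .Values.lbStaticIP }}
  ports:
  - port: {{.Values.appServPort}}
    protocol: TCP
  selector:
    app: {{ template "app.fullname" . }}-service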
We have defined our internal Load Balancer as follows:
apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-aks-test
It has its cluster IP and an external IP. We want to access this service from a VM in another virtual network.
We need to know its DNS name (the fully qualified name) in advance, because we deploy multiple applications from a deployment platform, and we want to know, based on the service name alone, how to reach a service once it has been deployed, rather than waiting for its IP address to be determined (manually or automatically). So, for example, that is our APP1, and afterwards we automatically install application APP2, which needs to reach this service.
For that reason we would like to avoid using the IP information.
How can we determine the service "hostname" by which the second application will access it?
The only information I found in the docs is: "If your service is using a dynamic or static public IP address, you can use the service annotation service.beta.kubernetes.io/azure-dns-label-name to set a public-facing DNS label." - but this is for a public load balancer, which we do not want!
Set up ExternalDNS in your Kubernetes cluster. Here is a guide for Azure Private DNS. This lets you update the DNS record for any hostname you pick for the service, dynamically, via Kubernetes resources.
A sample config looks like this (excerpted from the Azure Private DNS guide):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: externaldns
spec:
  selector:
    matchLabels:
      app: externaldns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: externaldns
    spec:
      containers:
      - name: externaldns
        image: k8s.gcr.io/external-dns/external-dns:v0.7.3
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=example.com
        - --provider=azure-private-dns
        - --azure-resource-group=externaldns
        - --azure-subscription-id=<use the id of your subscription>
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azure-config-file
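Once ExternalDNS is running, the internal service from the question only needs a hostname annotation for a record to be created in the private zone. A sketch, where ads.example.com is a placeholder that must fall under the configured --domain-filter:

apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # ExternalDNS watches Services and publishes this name to the private zone
    external-dns.alpha.kubernetes.io/hostname: ads.example.com
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-aks-test

APP2 can then reach the service at ads.example.com:9000, regardless of which internal IP the load balancer ends up with.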
An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
https://learn.microsoft.com/en-us/azure/aks/internal-lb
It seems you want this configuration? Since the VM is in another virtual network: is there peering between the two networks? You also need to allow the communication in the NSG.
You can run kubectl get svc
and use the EXTERNAL-IP of the ads-aks-test service; since you set the internal annotation to "true", it will be an internal IP.
If you are looking to resolve service names within the same cluster, you can use the service name itself.
https://kubernetes.io/docs/concepts/services-networking/service/
You can use something like: your-svc.your-namespace.svc.cluster.local
Note that this only works when the services are in the same Kubernetes cluster.
I am using a guide to create a new AKS cluster. This has worked fine; however, when I look at the cluster I have noticed there is no External-IP listed.
How do I add an external IP address so that I can access the cluster externally?
I am using AKS within Azure.
Paul
Save the following as a YAML file and apply it with kubectl apply -f {name of this file}.yml:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 8765
    targetPort: 9376
  type: LoadBalancer
From https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
This will create a load balancer that has an external IP address. You can also specify a particular address if you have reserved a static IP, as sketched below.
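A sketch with a reserved address (52.0.0.10 is a placeholder; on AKS the static public IP resource must live in a resource group the cluster can use, by default the MC_* node resource group):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 8765
    targetPort: 9376
  type: LoadBalancer
  loadBalancerIP: 52.0.0.10   # placeholder for your reserved static public IP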
I have created an AKS cluster and deployed a simple web server on it with the following YAML.
The Azure load balancer automatically gives it a public IP address, and it works fine.
Now I would like to limit the allowed source IP addresses, so that I can access it from one specific IP address only.
I've tried adding an Azure Firewall to the virtual network of the AKS cluster (aks-vnet-XXXXXXX) with some network rules, but that doesn't work.
Creating a NAT rule in the firewall that redirects packets to the load balancer works, but I can still access the pod via the public IP address of the load balancer.
Any suggestions?
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
(skipped something not important)
spec:
  containers:
  - name: nginx
    image: nginx:1.17.6
    ports:
    - containerPort: 80
What you're trying to achieve can be done with an NSG (Network Security Group) applied to the subnet where your AKS cluster resides: https://learn.microsoft.com/en-us/azure/aks/concepts-security#network-security
A more generic approach with fine-grained control requires creating an Ingress controller, creating an Ingress object for your service, and applying the ingress.kubernetes.io/whitelist-source-range annotation to it.
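A third option worth knowing about: the Service itself accepts a loadBalancerSourceRanges list, which the Azure cloud provider translates into NSG rules for you. A sketch, with 203.0.113.0/24 standing in for your allowed range:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  # Only clients from these CIDR ranges may reach the load balancer;
  # Azure enforces this via automatically managed NSG rules.
  loadBalancerSourceRanges:
  - 203.0.113.0/24
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80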
I'd like to expose the default port (1883) and the WebSocket port (9001) of an MQTT server on an Azure Kubernetes cluster.
Anyway, here is the deployment I currently wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-server
  template:
    metadata:
      labels:
        app: mqtt-server
        type: backend
    spec:
      containers:
      - name: mqtt-server
        image: eclipse-mosquitto:1.5.4
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
        ports:
        - name: mqtt-dflt-port
          containerPort: 1883
        - name: mqtt-ws-port
          containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
  - name: mqtt-ws-port
    protocol: TCP
    port: 1884
    targetPort: 9001
When I deploy it, everything looks fine, but the MQTT broker is unreachable, and my service is described like this:
mqtt-server-service LoadBalancer 10.0.163.167 51.143.170.64 1883:32384/TCP,1884:31326/TCP 21m
Why aren't ports 1883/9001 forwarded like they should be?
- First, make sure you're connecting to the service's cluster IP from within the cluster, not from the outside.
- Don't bother pinging the service IP to figure out if the service is accessible (remember, the service's cluster IP is a virtual IP and pinging it will never work).
- If you've defined a readiness probe, make sure it's succeeding; otherwise the pod won't be part of the service.
- To confirm that a pod is part of the service, examine the corresponding Endpoints object with kubectl get endpoints.
- If you're trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn't work, see if you can access it using its cluster IP instead of the FQDN.
- Check whether you're connecting to the port exposed by the service and not the target port.
- Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.
- If you can't even access your app through the pod's IP, make sure your app isn't only binding to localhost.
I don't see anything wrong: the ports you requested are being forwarded to, and the service has endpoints, so everything is okay. The extra ports you see (32384, 31326) are node ports the service opened for traffic to flow; it always does that. Also note that your manifest maps service port 1884 to container port 9001, so the WebSocket listener is reached on port 1884 of the external IP, not 9001.
Just to give more context: the service always opens node ports, because it needs to route traffic through some port on each node, but it cannot depend on any exact port being free, so it uses a random port from the 30000 range (by default).
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
If you must specify a known and static port assignment, you can add nodePort: <some number> to the ports definition in your Service. By default, node ports are assigned in the range 30000-32767.
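For example, against the Service above (31883 and 31901 are arbitrary picks from the default range):

apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
    nodePort: 31883   # fixed node port instead of a random one
  - name: mqtt-ws-port
    protocol: TCP
    port: 1884
    targetPort: 9001
    nodePort: 31901   # fixed node port for the WebSocket listener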