AKS Cluster Created has no External IP Address - azure

I am following a guide to create a new AKS cluster. This has worked fine; however, when I look at the services in the cluster I have noticed there is no External-IP (it shows <none>).
How do I add an external IP address so that I can access the cluster externally?
I am using AKS within Azure.
Paul

Save the following manifest to a file and apply it with kubectl apply -f {name of this file}.yml:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
From https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
This will create a load balancer that has an external IP address. You can also specify a particular address if you have a static IP.
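To watch the external IP get provisioned (the service name matches the manifest above), you can run:
kubectl get service example-service --watch
The EXTERNAL-IP column shows <pending> until Azure has finished creating the load balancer, then switches to the assigned address.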

Related

Kubernetes - service type LoadBalancer to use specific ip address every time deployed in AKS

In Azure, I am using Helm to deploy a service (type=LoadBalancer).
Below is the manifest file:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}-service-lb
  labels:
    app: {{ template "app.fullname" . }}-service-lb
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: {{.Values.appServPort}}
      nodePort: {{.Values.lbPort}}
      protocol: TCP
  selector:
    app: {{ template "app.fullname" . }}-service
Is it possible to tell the Kubernetes cluster to use a specific IP as the external IP every time I deploy the service?
Edit: every time the LoadBalancer service is deployed, a new external IP is allocated. In my case I wanted to specify the same IP each time, on the assumption that the IP address is not already in use within the network.
My understanding is that the Kubernetes cluster will allocate an external IP every time the service is deployed if one is not specified in the manifest file.
There is Azure documentation that details how to use a static IP within the manifest file, with a demo. Quoting from the docs:
If you would like to use a specific IP address with the internal load balancer, add the loadBalancerIP property to the load balancer YAML manifest. In this scenario, the specified IP address must reside in the same subnet as the AKS cluster and must not already be assigned to a resource. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet.
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
    - port: 80
  selector:
    app: internal-app
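Once applied, the service should report the requested address under EXTERNAL-IP (the values below are illustrative placeholders):
kubectl get service internal-app
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
internal-app   LoadBalancer   10.0.x.x     10.240.0.25   80:3xxxx/TCP   2m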

Azure Kubernetes - How to determine DNS name that can be used for INTERNAL Load Balancer?

We have defined our internal Load Balancer.
apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9000
  selector:
    app: ads-aks-test
It has its cluster IP and an external IP. We want to access this service from a VM in another virtual network.
We need to know its DNS name (a fully qualified name) in advance, because we deploy multiple applications from a deployment platform and want to know, based on the service name, how to reach a service once it has been deployed, rather than waiting for its IP address to be determined (either manually or automatically). So, for example, that is our APP1, and after it we automatically install application APP2, which needs to reach this service.
For that reason we would like to avoid using the IP information.
How can we determine the service "hostname" by which we will access it from the second application?
The only information I found in the docs is: "If your service is using a dynamic or static public IP address, you can use the service annotation service.beta.kubernetes.io/azure-dns-label-name to set a public-facing DNS label." But this is for the public load balancer, which we do not want!
Set up ExternalDNS in your K8s cluster. Here is a guide for Azure Private DNS. This will allow you to update the DNS record for any hostname you pick for the service, dynamically via Kubernetes resources.
Sample config looks like this (excerpted from the Azure Private DNS guide):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: externaldns
spec:
  selector:
    matchLabels:
      app: externaldns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: externaldns
    spec:
      containers:
        - name: externaldns
          image: k8s.gcr.io/external-dns/external-dns:v0.7.3
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com
            - --provider=azure-private-dns
            - --azure-resource-group=externaldns
            - --azure-subscription-id=<use the id of your subscription>
          volumeMounts:
            - name: azure-config-file
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: azure-config-file
          secret:
            secretName: azure-config-file
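With ExternalDNS running, annotate the internal LoadBalancer service with the hostname you want; ExternalDNS watches Services and creates the matching record in the private zone. A sketch, reusing the ads-aks-test service from the question and assuming example.com is the private zone configured above:
apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # picked up by ExternalDNS; this hostname is an assumption for illustration
    external-dns.alpha.kubernetes.io/hostname: ads-aks-test.example.com
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9000
  selector:
    app: ads-aks-test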
An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
https://learn.microsoft.com/en-us/azure/aks/internal-lb
It seems you want this configuration. Is there peering between the two virtual networks? You also need to allow the communication in the NSG.
You can run kubectl get svc
and use the EXTERNAL-IP of the ads-aks-test service; since you set the annotation to "true", it will be an internal IP.
If you are looking to resolve service names within the same cluster, you can use the service name itself.
https://kubernetes.io/docs/concepts/services-networking/service/
You can use something like: your-svc.your-namespace.svc.cluster.local
Note that this only works when the services are in the same Kubernetes cluster.
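For example, to test in-cluster resolution from a throwaway pod (the service name and port are taken from the question):
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://ads-aks-test.default.svc.cluster.local:9000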

Azure k8s load balancer DNS name

I have an Azure Kubernetes Service cluster, running version 1.15.7. This cluster recently replaced an older cluster version (1.12.something). In the past, once the various service pods were up and running, we would create a public IP resource in Azure portal and assign it a name, then create a Service resource like this:
apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  labels:
    app: myservice
spec:
  ports:
    - port: 80
      name: myservice-frontend
      targetPort: 80
    - port: 443
      name: myservice-frontend-ssl
      targetPort: 443
  selector:
    app: myservice-frontend
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
Finally, we'd add the public IP to a Traffic Manager instance.
Since upgrading to 1.15, this doesn't seem to work anymore. We can go through all the above steps, but as soon as the Service/Load Balancer is created, the public IP loses its DNS name, which causes it to be evicted from Traffic Manager. We can reset the name, but within 36-48 hours it gets lost again. My suspicion is that AKS is trying to apply a name to the associated IP address, but since I haven't defined one above, it just sets it to null.
How can I tell AKS what name to assign to a public IP? Better yet, can I skip the static public IP and let AKS provision a dynamic address and simply add the DNS name to Traffic Manager?
This is indeed a bug in AKS 1.15.7; see: Azure - PIP dns label will be default deleted
The upshot is, this is part of a new feature in 1.15 that allows the DNS label for a LoadBalancer IP to be set in the Service configuration. So, the definition above can become:
apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  labels:
    app: myservice
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: myservice-frontend
spec:
  ports:
    - port: 80
      name: myservice-frontend
      targetPort: 80
    - port: 443
      name: myservice-frontend-ssl
      targetPort: 443
  selector:
    app: myservice-frontend
  type: LoadBalancer
And the service will be automatically assigned a new static IP with the annotated DNS name.
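The label becomes the standard Azure public IP DNS name, <label>.<region>.cloudapp.azure.com. For the service above deployed in, say, eastus (the region here is assumed for illustration), that would be myservice-frontend.eastus.cloudapp.azure.com, which is a stable name you can add to Traffic Manager and verify with:
kubectl get service myservice-frontend
nslookup myservice-frontend.eastus.cloudapp.azure.com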

A DNS address within kubernetes cluster

I was following this Kubernetes tutorial in order to set up a DNS service and connect two separate Kubernetes pods. One, which should serve as a gateway, is listening on port 80; the other one on port 90.
When I use their node IPs, curl 10.32.0.24 and curl 10.32.0.25:90, I can reach them. Nevertheless, I can't figure out how to access them via my DNS service. What will the URL be?
The Namespace is default and this is the result of kubectl cluster-info:
Kubernetes master is running at IP_OF_MY_SERVER:6443
KubeDNS is running at IP_OF_MY_SERVER:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
My deployment.yaml is almost the same as in the tutorial:
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
    - name: foo # Actually, no port is needed.
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
    - image: time-provider
      name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
    - image: gateway
      name: busybox
The Kubernetes DNS service works inside a cluster and provides DNS names for pods and services there, not for access from outside.
Here is an extract from the tutorial you used:
Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain. This is best illustrated by example:
Assume a Service named foo in the Kubernetes namespace bar. A Pod running in namespace bar can look up this service by simply doing a DNS query for foo. A Pod running in namespace quux can look up this service by doing a DNS query for foo.bar.
So, the DNS names of your resources inside a cluster exist only in it.
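Concretely, the manifests above give each pod a name of the form <hostname>.<subdomain>.<namespace>.svc.<cluster-domain>. Assuming the default cluster.local domain, and matching the gateway on port 80 and the time provider on port 90 as described in the question, from any pod in the cluster:
curl http://busybox-2.default-subdomain.default.svc.cluster.local
curl http://busybox-1.default-subdomain.default.svc.cluster.local:90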
You call the service from the external network by node IPs: curl 10.32.0.24 and curl 10.32.0.25:90, and that is a correct way. If you want to use DNS names to connect to the cluster from outside, you should use some other DNS service to point the names at your cluster nodes or a LoadBalancer.
I recommend using a Service object to expose your application. Here are some articles about it: ways to connect, use a Service to access applications.

Does NodePort work on Azure Container Service (Kubernetes)

I have got the following service for the Kubernetes dashboard:
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Annotations: kubectl.kubernetes.io/last-applied-configuration={"kind":"Service","apiVersion":"v1","metadata":{"name":"kubernetes-dashboard","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"k...
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP: 10.0.106.144
Port: <unset> 80/TCP
NodePort: <unset> 30177/TCP
Endpoints: 10.244.0.11:9090
Session Affinity: None
Events: <none>
According to the documentation, I ran
az acs kubernetes browse
and it works on http://localhost:8001/ui
But I want to access it outside the cluster too. The describe output says that it is exposed using NodePort on port 30177.
But I'm not able to access it on http://<any node IP>:30177
As we know, to expose a service to the internet we can use NodePort and LoadBalancer.
As far as I know, Azure does not support the NodePort type now.
But I want to access it outside the cluster too.
We can use a LoadBalancer to re-create the Kubernetes dashboard service. Here are my steps:
1. Delete kubernetes-dashboard via the Kubernetes UI: set the Namespace to kube-system, select Services, then delete it.
2. Modify kubernetes-dashboard-service.yaml: SSH to the master VM, then change the type from NodePort to LoadBalancer:
root@k8s-master-47CAB7F6-0:/etc/kubernetes/addons# vi kubernetes-dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer
3. Start kubernetes browse from Azure CLI 2.0:
C:\Users>az acs kubernetes browse -g k8s -n containerservice-k8s
4. SSH to the master VM to check the status of the service.
5. Now we can browse the UI via the public IP address.
Update:
In the architecture of an Azure Container Service (Kubernetes) cluster, we should use a load balancer to expose services to the internet.
On second thought, this actually is expected to NOT work. The only public IP in the cluster, by default, is for the load balancer on the masters. And that load balancer obviously is not configured to forward random ports (like 30000-32767 for example). Further, none of the nodes directly have a public IP, so by definition NodePort is not going to work external to the cluster.
The only way you're going to make this work is by giving the nodes public IP addresses directly. This is not encouraged for a variety of reasons.
If you merely want to avoid waiting... then I suggest:
- Don't delete the Service. Most dev scenarios should just be kubectl apply -f <directory>, in which case you don't really need to wait for the Service to re-provision.
- Use Ingress along with nginx-ingress-controller, so that you only need to wait for the full LB+NSG+PublicIP provisioning once, and can then just add/remove Ingress objects in your dev scenario (see the sketch after this list).
- Use minikube for development scenarios, or manually add public IPs to the nodes to make the NodePort scenario work.
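A minimal sketch of the Ingress suggestion, assuming an nginx ingress controller is already installed; the hostname and path are illustrative, and the backend is the dashboard service from this question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx  # route through the nginx ingress controller
spec:
  rules:
    - host: dashboard.example.com  # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 80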
You can't expose the service via NodePort by running the kubectl expose command; you get a VIP address outside the range of the subnets your cluster sits on. Instead, deploy a service through a YAML file, where you can specify an internal load balancer as the type; this gives you a local IP on the master subnet, which you can connect to via the internal network.
Or you can just expose the service with an external load balancer and get a public IP, available on the www.
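A sketch of the internal-load-balancer variant for the dashboard service; the annotation is the same one used in the answers above, and the service name here is an illustrative choice:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-internal  # hypothetical name
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer  # with the annotation, Azure provisions an internal LB
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard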
