I'm trying to create two Kubernetes services: one which is a LoadBalancer with a cluster IP, and another which is headless (no cluster IP) and instead returns a round-robin collection of A records with the pod IP addresses (as it should, according to http://kubernetes.io/docs/user-guide/services/#headless-services).
I need to do this because I need a dynamic collection of pod IPs in order to do auto-clustering and service discovery.
My services look like this:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
    tier: messaging
spec:
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq
    tier: messaging
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-cluster
  labels:
    app: rabbitmq
    tier: messaging
spec:
  clusterIP: None
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq
    tier: messaging
With these two services, I get the following:
$ kubectl get services
NAME               CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
rabbitmq           10.23.255.174   <none>        5672/TCP   7m
rabbitmq-cluster   None            <none>        5672/TCP   7m
And DNS (from another pod) for the cluster IP works:
[root@gateway-3738159135-a7wp9 app]# nslookup rabbitmq.td-integration
Server:    10.23.240.10
Address:   10.23.240.10#53

Name:      rabbitmq.td-integration.svc.cluster.local
Address:   10.23.255.174
However, DNS for the 'headless' service doesn't return anything:
[root@gateway-3738159135-a7wp9 app]# nslookup rabbitmq-cluster.td-integration
Server:    10.23.240.10
Address:   10.23.240.10#53

** server can't find rabbitmq-cluster.td-integration: NXDOMAIN
It seems like there is no pod matching these labels within your cluster, therefore the DNS query doesn't return anything. This is expected.
Start the corresponding pods and you should see a list of A records.
Please be aware that these A records are not shuffled as far as I know, so your clients are expected to consume the DNS answer and perform their own round robin.
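Once pods matching the selector (app: rabbitmq, tier: messaging) are running, a quick way to check is sketched below; the endpoint addresses and output shown are only illustrative:

kubectl get endpoints rabbitmq-cluster
# NAME               ENDPOINTS                       AGE
# rabbitmq-cluster   10.20.1.5:5672,10.20.2.7:5672   1m

nslookup rabbitmq-cluster.td-integration
# Name:    rabbitmq-cluster.td-integration.svc.cluster.local
# Address: 10.20.1.5
# Name:    rabbitmq-cluster.td-integration.svc.cluster.local
# Address: 10.20.2.7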
I am following the steps here to create a new AKS cluster. This has worked fine; however, when I look at the cluster I have noticed there is no External-IP shown.
How do I add an external IP address so that I can access the cluster externally?
I am using AKS within Azure.
kubectl apply -f {name of this file}.yml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 8765
    targetPort: 9376
  type: LoadBalancer
From https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
This will create a load balancer that has an external IP address. You can also specify a particular address if you have a static IP reserved.
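For example, a sketch with a reserved static IP might look like the following; the address below is only a placeholder, not a real reservation:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - port: 8765
    targetPort: 9376
  type: LoadBalancer
  loadBalancerIP: 52.170.10.20   # placeholder; use the static IP you reserved in Azure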
I have an Azure Kubernetes Service cluster, running version 1.15.7. This cluster recently replaced an older cluster version (1.12.something). In the past, once the various service pods were up and running, we would create a public IP resource in Azure portal and assign it a name, then create a Service resource like this:
apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  labels:
    app: myservice
spec:
  ports:
  - port: 80
    name: myservice-frontend
    targetPort: 80
  - port: 443
    name: myservice-frontend-ssl
    targetPort: 443
  selector:
    app: myservice-frontend
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
Finally, we'd add the public IP to a Traffic Manager instance.
Since upgrading to 1.15, this doesn't seem to work anymore. We can go through all the above steps, but as soon as the Service/Load Balancer is created, the public IP loses its DNS name, which causes it to be evicted from Traffic Manager. We can reset the name, but within 36-48 hours it gets lost again. My suspicion is that AKS is trying to apply a name to the associated IP address, but since I haven't defined one above, it just sets it to null.
How can I tell AKS what name to assign to a public IP? Better yet, can I skip the static public IP and let AKS provision a dynamic address and simply add the DNS name to Traffic Manager?
This is indeed a bug in AKS 1.15.7
Azure - PIP dns label will be default deleted
The upshot is, this is part of a new feature in 1.15 that allows the DNS label for a LoadBalancer IP to be set in the Service configuration. So, the definition above can become:
apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  labels:
    app: myservice
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: myservice-frontend
spec:
  ports:
  - port: 80
    name: myservice-frontend
    targetPort: 80
  - port: 443
    name: myservice-frontend-ssl
    targetPort: 443
  selector:
    app: myservice-frontend
  type: LoadBalancer
And the service will be automatically assigned a new static IP with the annotated DNS name.
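As a quick sanity check, something like the sketch below should work; the FQDN follows the usual <label>.<region>.cloudapp.azure.com pattern for Azure public IP DNS labels, and the region shown is only illustrative:

kubectl get service myservice-frontend
# EXTERNAL-IP should show the provisioned public IP

nslookup myservice-frontend.westeurope.cloudapp.azure.com
# should resolve to that same public IP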
I'd like to expose the default port (1883) and the WS port (9001) of an MQTT server on an Azure Kubernetes Service cluster.
Anyway, here is the deployment I currently wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-server
  template:
    metadata:
      labels:
        app: mqtt-server
        type: backend
    spec:
      containers:
      - name: mqtt-server
        image: eclipse-mosquitto:1.5.4
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
        ports:
        - name: mqtt-dflt-port
          containerPort: 1883
        - name: mqtt-ws-port
          containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
  - name: mqtt-ws-port
    protocol: TCP
    port: 1884
    targetPort: 9001
When I deploy it, everything looks fine, but the MQTT broker is unreachable and my service is described like this:
mqtt-server-service LoadBalancer 10.0.163.167 51.143.170.64 1883:32384/TCP,1884:31326/TCP 21m
Why aren't ports 1883 and 9001 forwarded as they should be?
- First, make sure you're connecting to the service's cluster IP from within the cluster, not from the outside.
- Don't bother pinging the service IP to figure out if the service is accessible (remember, the service's cluster IP is a virtual IP and pinging it will never work).
- If you've defined a readiness probe, make sure it's succeeding; otherwise the pod won't be part of the service.
- To confirm that a pod is part of the service, examine the corresponding Endpoints object with kubectl get endpoints (a sketch of this check follows the list).
- If you're trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn't work, see if you can access it using its cluster IP instead of the FQDN.
- Check whether you're connecting to the port exposed by the service and not the target port.
- Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.
- If you can't even access your app through the pod's IP, make sure your app isn't only binding to localhost.
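A minimal sketch of those checks against the manifests above; the names follow the question and the expected output is only illustrative:

kubectl get endpoints mqtt-server-service
# should list the pod IP with ports 1883 and 9001

kubectl run -it --rm debug --image=busybox --restart=Never -- sh
# from inside this throwaway pod, try the service ports
# (service port 1884 maps to container port 9001):
#   nc mqtt-server-service 1883
#   nc mqtt-server-service 1884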
I don't see anything wrong; the ports you requested are being forwarded to. The service also created temporary ports on the nodes for the traffic to flow through (it always does that). The service has endpoints, so everything is okay.
Just to give more context: it always does that because it needs to route traffic through some port on the node, but it cannot depend on any exact port being free, so it uses a random port from the 30000+ range (by default).
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
If you must have a known and static port assignment, you can add nodePort: some-number to the ports definition in your service, as sketched below. By default, node ports are assigned in the 30000-32767 range.
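For example, a sketch of the question's service with fixed node ports; the specific numbers are placeholders within the default range:

apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
    nodePort: 31883   # placeholder; must be within 30000-32767
  - name: mqtt-ws-port
    protocol: TCP
    port: 1884
    targetPort: 9001
    nodePort: 31884   # placeholder; must be within 30000-32767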
I've set-up an AKS cluster and am now trying to connect to it. My deployment YAML is here:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443
metadata:
  name: my-test
If I run the dashboard, I get this (screenshot omitted), which looks like it should be telling me the external endpoint, but isn't. I have a theory that this is because the YAML file is only deploying a Pod, which is in some way not able to expose an endpoint - is that the case, and if so, why? Otherwise, how can I find this endpoint?
That's not how this works; you need to read up on basic Kubernetes concepts. Pods are only containers. To expose pods you need to create services (and you need labels); to expose pods externally you need to set the service type to LoadBalancer. You probably want to use Deployments instead of bare Pods; they are a lot easier and more reliable.
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
So, in short: you need to add labels to your pod and create a service of type LoadBalancer with selectors that match your pod's labels.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 443
  type: LoadBalancer
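For completeness, here is a sketch of the question's Pod with a label added so the selector above can match it; the app: MyApp label is only illustrative and just has to agree with the service's selector:

apiVersion: v1
kind: Pod
metadata:
  name: my-test
  labels:
    app: MyApp   # must match the service's selector
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443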
I was following this Kubernetes tutorial in order to set up a DNS service and connect two separate Kubernetes pods. The one which should serve as a gateway is listening on port 80, the other one on port 90.
When I use their node IPs, curl 10.32.0.24 and curl 10.32.0.25:90, I can reach them. Nevertheless, I can't figure out how to access them via my DNS service. What will the URL be?
The Namespace is default and this is the result of kubectl cluster-info:
Kubernetes master is running at IP_OF_MY_SERVER:6443
KubeDNS is running at IP_OF_MY_SERVER:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
My deployment.yaml is almost the same as in the tutorial:
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # Actually, no port is needed.
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: time-provider
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
  - image: gateway
    name: busybox
The Kubernetes DNS service works inside the cluster and provides DNS names for pods and services there; those names are not resolvable from outside the cluster.
Here is an extract from the instructions you used:
Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain. This is best illustrated by example:
Assume a Service named foo in the Kubernetes namespace bar. A Pod running in namespace bar can look up this service by simply doing a DNS query for foo. A Pod running in namespace quux can look up this service by doing a DNS query for foo.bar.
So, the DNS names of your resources inside a cluster exist only in it.
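Inside the cluster, a lookup would look roughly like the sketch below (assuming a pod image with nslookup available; the names follow the hostname/subdomain convention for headless services, so treat the exact output as illustrative):

kubectl exec -it busybox1 -- nslookup default-subdomain.default.svc.cluster.local
# the individual pods should also be addressable as
#   busybox-1.default-subdomain.default.svc.cluster.local
#   busybox-2.default-subdomain.default.svc.cluster.local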
You reach the services from the external network by node IPs: curl 10.32.0.24 and curl 10.32.0.25:90, and that is a correct way. If you want to use DNS names to connect to the cluster from outside, you should use some other DNS service to point those names at your cluster nodes or at a LoadBalancer.
I recommend you use a Service object to expose your application. Here are some articles about it: ways to connect, use a Service to access applications.