I'd like to expose the default port (1883) and the WebSocket port (9001) of an MQTT server on an Azure Kubernetes cluster.
Anyway, here is the deployment I currently wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-server
  template:
    metadata:
      labels:
        app: mqtt-server
        type: backend
    spec:
      containers:
      - name: mqtt-server
        image: eclipse-mosquitto:1.5.4
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
        ports:
        - name: mqtt-dflt-port
          containerPort: 1883
        - name: mqtt-ws-port
          containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
  - name: mqtt-ws-port
    protocol: TCP
    port: 1884
    targetPort: 9001
When I deploy it, everything looks fine, but the MQTT broker is unreachable and my service is described like this:
mqtt-server-service   LoadBalancer   10.0.163.167   51.143.170.64   1883:32384/TCP,1884:31326/TCP   21m
Why aren't ports 1883/9001 forwarded the way they should be?
- First, make sure you're connecting to the service's cluster IP from within the cluster, not from the outside.
- Don't bother pinging the service IP to figure out if the service is accessible (remember, the service's cluster IP is a virtual IP and pinging it will never work).
- If you've defined a readiness probe, make sure it's succeeding; otherwise the pod won't be part of the service.
- To confirm that a pod is part of the service, examine the corresponding Endpoints object with kubectl get endpoints (see the example commands after this list).
- If you're trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn't work, see if you can access it using its cluster IP instead of the FQDN.
- Check whether you're connecting to the port exposed by the service and not the target port.
- Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.
- If you can't even access your app through the pod's IP, make sure your app isn't only binding to localhost.
I don't see anything wrong; the ports you requested are being forwarded. The service also created ports on the nodes for the traffic to flow through (it always does that). The service has endpoints, so everything is okay.
Just to give more context: it always does that because it needs to route traffic to some port on each node, but it cannot depend on that exact port being free, since it might already be occupied, so it uses a random port from the 30000 range (by default).
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
If you must specify a known and static port assignment, you can add nodePort: some-number to your ports definition in your Service. By default, NodePorts are assigned in the 30000-32767 range.
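For illustration, a hedged sketch of what that could look like for the Service from the question; the nodePort values 31883 and 31884 are arbitrary picks inside the default range, nothing mandates them:

ports:
- name: mqtt-dflt-port
  protocol: TCP
  port: 1883
  targetPort: 1883
  nodePort: 31883   # fixed node port instead of a random one
- name: mqtt-ws-port
  protocol: TCP
  port: 1884
  targetPort: 9001
  nodePort: 31884   # fixed node port instead of a random one

Note also that, with the manifest above, the WebSocket listener is reachable on service port 1884 (mapped to container port 9001), not on 9001 itself; if you expect to connect to 9001 from outside, the service port would need to be 9001 as well.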
Related
I have an Azure Kubernetes Service cluster, running version 1.15.7. This cluster recently replaced an older cluster version (1.12.something). In the past, once the various service pods were up and running, we would create a public IP resource in Azure portal and assign it a name, then create a Service resource like this:
apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  labels:
    app: myservice
spec:
  ports:
  - port: 80
    name: myservice-frontend
    targetPort: 80
  - port: 443
    name: myservice-frontend-ssl
    targetPort: 443
  selector:
    app: myservice-frontend
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
Finally, we'd add the public IP to a Traffic Manager instance.
Since upgrading to 1.15, this doesn't seem to work anymore. We can go through all the above steps, but as soon as the Service/Load Balancer is created, the public IP loses its DNS name, which causes it to be evicted from Traffic Manager. We can reset the name, but within 36-48 hours it gets lost again. My suspicion is that AKS is trying to apply a name to the associated IP address, but since I haven't defined one above, it just sets it to null.
How can I tell AKS what name to assign to a public IP? Better yet, can I skip the static public IP and let AKS provision a dynamic address and simply add the DNS name to Traffic Manager?
This is indeed a bug in AKS 1.15.7
Azure - PIP dns label will be default deleted
The upshot is, this is part of a new feature in 1.15 that allows the DNS label for a LoadBalancer IP to be set in the Service configuration. So, the definition above can become:
apiVersion: v1
kind: Service
metadata:
  name: myservice-frontend
  labels:
    app: myservice
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: myservice-frontend
spec:
  ports:
  - port: 80
    name: myservice-frontend
    targetPort: 80
  - port: 443
    name: myservice-frontend-ssl
    targetPort: 443
  selector:
    app: myservice-frontend
  type: LoadBalancer
And the service will be automatically assigned a new static IP with the annotated DNS name.
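To check that the label actually sticks, something like the following should work; the node resource group name is just the usual MC_<resource-group>_<cluster>_<region> pattern and needs to be replaced with yours, and the resulting FQDN is assumed to follow the standard <label>.<region>.cloudapp.azure.com form:

# External IP assigned to the service
kubectl get svc myservice-frontend

# DNS labels/FQDNs on the public IPs in the cluster's node resource group
az network public-ip list --resource-group MC_myrg_mycluster_westeurope --query "[].dnsSettings.fqdn" -o tsv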
I have created a Kubernetes cluster on Azure. I have deployed some pods that have no frontend (microservices).
I have performed tests locally using Postman and VS Code: these microservices return either 200 OK or 500.
The problem is that in Kubernetes I get the external IP correctly, but it is impossible for me to access it from outside.
I have another Mongo container that I can access without problems. I attach some images to help troubleshoot:
Can you help me? Thanks!!
Kubernetes is a little more complex than plain Docker containers, so it can be confusing to get things running at first. I will explain the points at which you need to configure how a service is exposed.
Each container has its own IP address space, so each container can use the same port for its application. In your case you might want to use port 6060. This is the port the application needs to bind to, on all network interfaces (IP 0.0.0.0), to be reachable from outside the container. This is the port you would declare as EXPOSE in your Dockerfile.
When testing locally you can map each container to a different local port for testing: docker run -p external-port:internal-port
The port you use for EXPOSE is the port you configure as containerPort in a Pod or Deployment.
One or many pods are exposed as a load-balanced service inside Kubernetes using a Service. There you usually want to map a request port - for HTTP typically 80 - to the container port, in your case 6060.
The service can then be exposed externally using a LoadBalancer. The external IP of the LoadBalancer is mapped to the (virtual) IP of your Service, the Service maps the request port to the container port and selects an appropriate pod using the selector, and the pod contains a container listening on the container port that then replies to your request.
The whole chain must be configured correctly in order to get it working. Keeping it simple (not using different ports for each application) makes it easier to get right.
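As a minimal sketch of that chain, assuming the application really listens on 6060; the names and the image below are placeholders, not anything from your cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: myregistry.azurecr.io/web-app   # placeholder image
        ports:
        - containerPort: 6060                  # the port the app binds to (0.0.0.0:6060)
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
  - port: 80          # request port on the external load balancer
    targetPort: 6060  # container port the traffic is forwarded to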
Did you try to hit the REST API URIs like
ExternalIP:Port/uri
This should be accessible; I also use this approach with AKS.
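For example, from outside the cluster, something along these lines; the IP, port and path are placeholders for whatever kubectl get svc and your microservice's routes actually give you:

curl -v http://<EXTERNAL-IP>:6060/api/health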
As I see from your question and the YAML file in your comment, one possible reason is that you set a command in your deployment's container; this command overwrites the default command of the image. So I suspect your application may not start at all, which you can check.
I would also suggest checking whether the port you expose to the outside is the same as the port the application inside the image listens on.
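A few commands that might help with those checks; the label app=permissions matches the YAML posted further down and would need adjusting for the other microservices:

# Is the pod Running, or crash-looping / stuck pulling the image?
kubectl get pods -l app=permissions

# Events and container state (exit codes, image pull errors, overridden command)
kubectl describe pods -l app=permissions

# Application output, to see whether it started and which port it bound to
kubectl logs -l app=permissions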
I have been able to bring up one of the 4 microservices.
I have tried to bring up the three remaining microservices with the same YAML (changing the image URL and the port) and they do not work.
The YAML used is this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: permissions
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: permissions
    spec:
      containers:
      - name: permissions
        image: URL IMAGE
        ports:
        - containerPort: 6060
      imagePullSecrets:
      - name: nameimage
---
apiVersion: v1
kind: Service
metadata:
  name: permissions
spec:
  type: LoadBalancer
  ports:
  - port: 6060
  selector:
    app: permissions
I added this to set resource limits for the other microservices:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: users
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
      - name: users
        image: URL IMAGE
        resources:
          requests:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 6061
---
apiVersion: v1
kind: Service
metadata:
  name: users
spec:
  type: LoadBalancer
  ports:
  - port: 6061
  selector:
    app: users
As I said, I could only bring up the first one.
Any help?
Thanks!
I've managed to deploy a .NET Core API to Azure's managed Kubernetes service (ACS) and it's working as expected. The image is hosted in an Azure container registry.
I'm now trying to get the service to be accessible via HTTPS. I'd like a very simple setup.
Firstly, do I have to create an OpenSSL cert or register with Let's Encrypt? I'd ideally like to avoid having to manage SSL certs separately, but from the documentation it's not clear whether this is required.
Secondly, I've got a manifest file below. I can still access port 80 using this manifest; however, I am not able to access port 443. I don't see any errors, so it's not clear what the problem is. Any ideas?
Thanks
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: someappservice-deployment
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
    loadbalancer.openstack.org/floating-network-id: "9be23551-38e2-4d27-b5ea-ea2ea1321bd6"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: someappservices
    spec:
      containers:
      - name: someappservices
        image: myimage.azurecr.io/someappservices
        ports:
        - containerPort: 80
        - containerPort: 443
---
kind: Service
apiVersion: v1
metadata:
  name: external-http-someappservice
spec:
  selector:
    app: someappservices
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
From what I understand, you will need something like an NGINX ingress controller to handle the SSL termination and will also need to manage certificates. Kubernetes cert-manager is a nice package that can help with the certs.
Here is a write up on how to do both in an AKS cluster:
Deploy an HTTPS enabled ingress controller on AKS
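For orientation only, a rough sketch of what the TLS end of that setup can look like once an ingress controller and cert-manager are installed; the host name, secret name, issuer name and annotation key are assumptions (the annotation in particular depends on the cert-manager version in use):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someappservice-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes a ClusterIssuer named letsencrypt-prod exists
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - someapp.example.com
    secretName: someapp-tls            # cert-manager stores the issued certificate here
  rules:
  - host: someapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-http-someappservice
            port:
              number: 80               # TLS terminates at the ingress; traffic to the pod stays on port 80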
If I do not misunderstand, you want to access your service via HTTPS with simple steps. If you don't have particularly strict security requirements such as SSL certs, you can just expose the ports through the load balancer and access your service from the Internet; it's simple to configure.
The YAML file you posted looks all right. You can check it from the Kubernetes dashboard and the Azure portal, with a screenshot like this:
You can also check with the command kubectl get svc, and the output will look like this:
But if you have particularly strict security requirements, you need an nginx ingress controller, as in the other answer here. HTTPS is a network security protocol, so in that case you do indeed need to configure the nginx ingress controller.
Using Kubernetes on Azure Container Service (not the new AKS though).
I'm deploying a front-end app like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 2
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: etc/etc
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: frontend
I can see that it's started correctly from the logs.
From kubectl get services I can see that it has been assigned an External IP. But when I try to access that via HTTP it just hangs.
I also can see in the Azure Portal that the Azure Load Balancer was created and is pointing to the correct external IP and backend pool.
Can anyone tell me if I somehow messed up the port assignments in the pod definition?
--
Update: Somehow it started working on its own (or so it seemed). But when I tried to re-create it as a Service instead of a Deployment, it stopped working.
Here's my Service config:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: meteor
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: http-server
  selector:
    app: frontend
  sessionAffinity: ClientIP
  type: LoadBalancer
It creates the external IP for the load balancer, and I can see that it is properly matching the pods, but I get a timeout when I try to connect to the external IP. Meanwhile, the load balancer that was created as part of the deployment continues to work just fine.
Do you know how to change the agent VM size in an existing ACS deployment?
We can change the k8s agent size via the Azure portal; the agent in Azure is a VM, so we resize that VM:
Hope this helps.
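If you prefer the CLI, something along these lines should also work for an ACS agent VM; the resource group and VM name are placeholders, so list the actual agent VMs first:

az vm list --resource-group my-acs-rg -o table
az vm resize --resource-group my-acs-rg --name k8s-agent-12345678-0 --size Standard_DS3_v2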
It looks like the problem was a mis-specification of the targetPort. Adjusting it to the correct value and replacing the Service definition solved the problem.
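In this case that presumably means pointing targetPort back at the container port the deployment actually exposes; a sketch of the corrected ports section, with 3000 taken from the containerPort in the deployment above (which never defines a port named http-server):

ports:
- port: 80
  protocol: TCP
  targetPort: 3000   # must match the containerPort (or a named port) of the frontend pods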
I'm trying to create two Kubernetes services: one is a LoadBalancer with a cluster IP, and the other is headless (no cluster IP) and instead returns a round-robin collection of A records for the pod IP addresses (as it should, according to http://kubernetes.io/docs/user-guide/services/#headless-services).
I need to do this because I need a dynamic collection of pod IPs in order to do auto-clustering and service discovery.
My services look like this:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
    tier: messaging
spec:
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq
    tier: messaging
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-cluster
  labels:
    app: rabbitmq
    tier: messaging
spec:
  clusterIP: None
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq
    tier: messaging
With these two services, I get the following:
$ kubectl get services
NAME               CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
rabbitmq           10.23.255.174   <none>        5672/TCP   7m
rabbitmq-cluster   None            <none>        5672/TCP   7m
And DNS (from another pod) for the cluster IP works:
[root@gateway-3738159135-a7wp9 app]# nslookup rabbitmq.td-integration
Server: 10.23.240.10
Address: 10.23.240.10#53
Name: rabbitmq.td-integration.svc.cluster.local
Address: 10.23.255.174
However, the DNS for the 'headless' service doesn't resolve:
[root@gateway-3738159135-a7wp9 app]# nslookup rabbitmq-cluster.td-integration
Server: 10.23.240.10
Address: 10.23.240.10#53
** server can't find rabbitmq-cluster.td-integration: NXDOMAIN
It seems like there is no pod matching these labels within your cluster, therefore the DNS query doesn't return anything. This is expected.
Start the corresponding pods and you should see a list of A records.
Please be aware that these A records are not shuffled as far as I know, so your clients are expected to consume the DNS answer and perform their own round robin.
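To confirm this once the pods are up, you can compare the endpoints with the DNS answer; the td-integration namespace is taken from the nslookup output above, and busybox here is just a throwaway test image:

# The headless service should list one address per ready pod
kubectl get endpoints rabbitmq-cluster -n td-integration

# The same addresses should come back as multiple A records
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup rabbitmq-cluster.td-integration.svc.cluster.local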