My external IP on Azure Kubernetes Service doesn't work

I have created a Kubernetes cluster on Azure and deployed some pods that have no frontend (microservices).
I have tested them locally using Postman and VS Code: these microservices return either 200 OK or 500.
The problem is that in Kubernetes the external IP is assigned correctly, but it is impossible for me to access it from outside.
I have another Mongo container that I can access without problems. I attach some images that may help:
Can you help me? Thanks!!

Kubernetes is a bit more complex than plain Docker containers, so it can be confusing to get running at first. I will explain the points at which you need to configure exposure of a service.
Each container has its own IP address space, so every container can use the same port for its application. In your case you might want to use port 6060. This is the port the application needs to bind to, on all network interfaces (IP 0.0.0.0), to be reachable from outside the container. It is the port you would declare with EXPOSE in your Dockerfile.
When testing locally you can map each container to a different local port: docker run -p external-port:internal-port
The port you use for EXPOSE is the port you configure as containerPort in a Pod or Deployment.
One or many pods are exposed as a load-balanced service inside Kubernetes using a Service. There you usually map a request port (for HTTP, typically 80) to the container port, in your case 6060.
The Service can then be exposed externally using a LoadBalancer. The external IP of the LoadBalancer is mapped to the (virtual) IP of your Service, the Service maps the request port to the container port and selects an appropriate pod using its selector, and the pod contains a container listening on the container port that finally answers your request.
The whole chain must be configured correctly for it to work. Keeping it simple (not using different ports for each application) makes it easier to get right.
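To make the chain concrete, here is a minimal sketch of a Deployment and Service wired together end to end. The image name is a placeholder, and port 6060 is taken from the question; adjust both to your application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app          # the Service selector below must match this label
    spec:
      containers:
      - name: example-app
        image: example.azurecr.io/example-app   # placeholder image
        ports:
        - containerPort: 6060     # the port the app binds to on 0.0.0.0 (EXPOSE in the Dockerfile)
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: LoadBalancer              # asks Azure for an external IP
  selector:
    app: example-app              # selects the pods above
  ports:
  - port: 80                      # request port exposed on the external IP
    targetPort: 6060              # forwarded to the containerPort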

Did you try to hit REST API URIs like
ExternalIP:Port/uri
These should be accessible; I also use this approach with AKS.

As I see from your question and the YAML file in your comment, one possible reason is that you set a command on your deployment's container. This command overrides the default command (entrypoint) of the image, so your application may never actually start; you should check that.
I also suggest checking that the port you expose to the outside is the same port the application inside the image listens on.
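A quick way to verify both points, using standard kubectl commands (the pod name is whatever kubectl get pods reports for your deployment):

kubectl get pods                    # is the pod Running, or stuck in CrashLoopBackOff?
kubectl logs <pod-name>             # did the application start and bind its port?
kubectl describe pod <pod-name>     # shows the effective command and the declared container ports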

I have been able to bring up one of the 4 microservices.
I have tried to bring up the three remaining microservices with the same YAML (changing the image URL and the port) and they do not work.
The YAML used is this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: permissions
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: permissions
    spec:
      containers:
      - name: permissions
        image: URL IMAGE
        ports:
        - containerPort: 6060
      imagePullSecrets:
      - name: nameimage
---
apiVersion: v1
kind: Service
metadata:
  name: permissions
spec:
  type: LoadBalancer
  ports:
  - port: 6060
  selector:
    app: permissions
I added this to set resource limits for the other microservices:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: users
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
      - name: users
        image: URL IMAGE
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 6061
---
apiVersion: v1
kind: Service
metadata:
  name: users
spec:
  type: LoadBalancer
  ports:
  - port: 6061
  selector:
    app: users
As I said, I could only bring up the first one.
Any help?
Thanks!

Related

Azure Kubernetes - How to determine DNS name that can be used for INTERNAL Load Balancer?

We have defined our internal Load Balancer.
apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-aks-test
It has its IP and external IP. We want to access this service from a VM in another virtual network.
We need to know its DNS name (the fully qualified name) in advance, because we are deploying multiple applications from a deployment platform and we want to know, based on the service name alone, how to reach a service once it is successfully deployed, without waiting for its IP address to be determined (either manually or automatically). For example, this is our APP1, and after it we automatically install application APP2, which needs to reach this service.
For that reason we would like to avoid relying on the IP information.
How can we determine the service "hostname" by which the second application will access it?
The only information I found in the docs is: "If your service is using a dynamic or static public IP address, you can use the service annotation service.beta.kubernetes.io/azure-dns-label-name to set a public-facing DNS label." - but this is for a public load balancer, which we do not want!
Set up ExternalDNS in your K8s cluster. Here is a guide for Azure Private DNS. This will allow you to update the DNS record for any hostname you pick for the service, dynamically via Kubernetes resources.
A sample config looks like this (excerpted from the Azure Private DNS guide):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: externaldns
spec:
  selector:
    matchLabels:
      app: externaldns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: externaldns
    spec:
      containers:
      - name: externaldns
        image: k8s.gcr.io/external-dns/external-dns:v0.7.3
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=example.com
        - --provider=azure-private-dns
        - --azure-resource-group=externaldns
        - --azure-subscription-id=<use the id of your subscription>
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azure-config-file
An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
https://learn.microsoft.com/en-us/azure/aks/internal-lb
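For example, once ExternalDNS is running you can have it create a record for the service by annotating the service with the hostname you want. The annotation below is ExternalDNS's standard hostname annotation; the domain is a placeholder for a zone in your Azure Private DNS:

apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    external-dns.alpha.kubernetes.io/hostname: ads-aks-test.example.com   # placeholder domain
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-aks-test

APP2 can then reach APP1 at ads-aks-test.example.com:9000 as soon as the record is published, without knowing the IP in advance.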
It seems you want this configuration? Is there peering between the virtual networks? You also need to allow the communication in the NSG.
You can run kubectl get svc
and use the external IP of the service ads-aks-test; since you set the annotation to "true", it will be an internal IP.
If you want to resolve the service name within the same cluster, you can use the service name itself.
https://kubernetes.io/docs/concepts/services-networking/service/
You can use something like: your-svc.your-namespace.svc.cluster.local
Note that this only works when the services are in the same Kubernetes cluster.
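As a quick check from inside the cluster, you can resolve the service by that name; this sketch assumes the service lives in the default namespace:

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup ads-aks-test.default.svc.cluster.local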

Kubernetes Service does not map the right port

I'd like to expose the default port (1883) and WS port (9001) of an MQTT server on an Azure Kubernetes cluster.
Here is the deployment I currently wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-server
  template:
    metadata:
      labels:
        app: mqtt-server
        type: backend
    spec:
      containers:
      - name: mqtt-server
        image: eclipse-mosquitto:1.5.4
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
        ports:
        - name: mqtt-dflt-port
          containerPort: 1883
        - name: mqtt-ws-port
          containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
  - name: mqtt-ws-port
    protocol: TCP
    port: 1884
    targetPort: 9001
When I deploy it, everything seems fine, but the MQTT broker is unreachable and my service is described like this:
mqtt-server-service LoadBalancer 10.0.163.167 51.143.170.64 1883:32384/TCP,1884:31326/TCP 21m
Why aren't ports 1883/9001 forwarded the way they should be?
First, make sure you're connecting to the service's cluster IP from within the cluster, not from the outside.
Don't bother pinging the service IP to figure out if the service is accessible (remember, the service's cluster IP is a virtual IP and pinging it will never work).
If you've defined a readiness probe, make sure it's succeeding; otherwise the pod won't be part of the service.
To confirm that a pod is part of the service, examine the corresponding Endpoints object with kubectl get endpoints.
If you're trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn't work, see if you can access it using its cluster IP instead of the FQDN.
Check whether you're connecting to the port exposed by the service and not the target port.
Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.
If you can't even access your app through the pod's IP, make sure your app isn't only binding to localhost.
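A few commands that walk through this checklist, assuming the names from the question (<pod-ip> is the address reported by the second command):

kubectl get endpoints mqtt-server-service       # does the service have pod IPs behind it?
kubectl get pods -l app=mqtt-server -o wide     # pod status, readiness, and pod IP
kubectl run -it --rm nc-test --image=busybox --restart=Never -- nc <pod-ip> 1883   # connect to the pod directly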
I don't see anything wrong; the ports you requested are being forwarded to, and the service created ports on the nodes for the traffic to flow (it always does that). The service got endpoints; everything is okay.
Just to give more context: it always does that because it needs to route traffic to some port on the node, but it cannot depend on any exact port being free, so it uses a random port from the 30000+ range (by default).
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
If you must have a known and static port assignment, you can add nodePort: some-number to the ports definition in your service. By default, node ports are assigned in the range 30000-32767.
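For example, a sketch of the ports section of the service above with fixed node ports; the values are chosen arbitrarily from the default range:

ports:
- name: mqtt-dflt-port
  protocol: TCP
  port: 1883
  targetPort: 1883
  nodePort: 31883    # arbitrary free port in 30000-32767
- name: mqtt-ws-port
  protocol: TCP
  port: 1884
  targetPort: 9001
  nodePort: 31884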

Finding the URL for an AKS Cluster

I've set up an AKS cluster and am now trying to connect to it. My deployment YAML is here:
apiVersion: v1
kind: Pod
metadata:
  name: my-test
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443
If I run the dashboard, I get this:
It looks like it should be telling me the external endpoint, but it isn't. My theory is that this is because the YAML file only deploys a Pod, which somehow cannot expose an endpoint on its own. Is that the case, and if so, why? Otherwise, how can I find this endpoint?
That's not how it works; you need to read up on basic Kubernetes concepts. Pods are only containers; to expose pods you need to create Services (and you need labels), and to expose pods externally you need to set the Service type to LoadBalancer. You probably want to use Deployments instead of bare Pods, as they are a lot easier and more reliable.
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
So, in short, you need to add labels to your pod and create a Service of type LoadBalancer with selectors that match your pod's labels (a matching Pod sketch follows the Service below):
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 443
  type: LoadBalancer
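For the selector above to match, the Pod from the question would also need the corresponding label in its metadata, for example:

apiVersion: v1
kind: Pod
metadata:
  name: my-test
  labels:
    app: MyApp        # must match the Service's selector
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443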

External https on azure kubernetes managed service

I've managed to deploy a .NET Core API to the Azure Kubernetes managed service (ACS) and it's working as expected. The image is hosted in an Azure container registry.
I'm now trying to make the service accessible via HTTPS, and I'd like a very simple setup.
Firstly, do I have to create an OpenSSL cert or register with Let's Encrypt? I'd ideally like to avoid having to manage SSL certs separately, but from the documentation it's not clear whether this is required.
Secondly, I've got a manifest file below. I can still access port 80 using this manifest; however, I am not able to access port 443. I don't see any errors, so it's not clear what the problem is. Any ideas?
Thanks
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: someappservice-deployment
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
    loadbalancer.openstack.org/floating-network-id: "9be23551-38e2-4d27-b5ea-ea2ea1321bd6"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: someappservices
    spec:
      containers:
      - name: someappservices
        image: myimage.azurecr.io/someappservices
        ports:
        - containerPort: 80
        - containerPort: 443
---
kind: Service
apiVersion: v1
metadata:
  name: external-http-someappservice
spec:
  selector:
    app: someappservices
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
From what I understand, you will need something like an NGINX ingress controller to handle the SSL termination, and you will also need to manage certificates. cert-manager is a nice package that can help with the certs.
Here is a write-up on how to do both in an AKS cluster:
Deploy an HTTPS enabled ingress controller on AKS
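As a rough sketch of where that guide ends up: an Ingress annotated for cert-manager, which then obtains and renews the Let's Encrypt certificate for you. The issuer name, hostname, and TLS secret name below are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someappservice-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # placeholder issuer name
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com             # placeholder hostname
    secretName: tls-secret        # cert-manager stores the certificate here
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: someappservices
            port:
              number: 80

With this in place the backend Service can stay internal (type ClusterIP); the ingress controller's own LoadBalancer terminates TLS on 443.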
If I don't misunderstand, you want to access your service via HTTPS with simple steps. If you don't have particularly strict security requirements such as SSL certs, you can just expose the ports through the load balancer and access your service from the Internet; it's simple to configure.
The YAML file you posted looks all right. You can check the result from the Kubernetes dashboard and the Azure portal, or with the command kubectl get svc.
But if you do have strict security requirements, you need an NGINX ingress controller, as in the other answer. HTTPS is a network security protocol, and you do need to configure an ingress controller to use it.

Kubernetes/Azure ACS: Why can't I access external IPs of my Service?

Using Kubernetes on Azure Container Service (not the new AKS, though).
I'm deploying a front-end app like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 2
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: etc/etc
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: frontend
I can see that it's started correctly from the logs.
From kubectl get services I can see that it has been assigned an External IP. But when I try to access that via HTTP it just hangs.
I also can see in the Azure Portal that the Azure Load Balancer was created and is pointing to the correct external IP and backend pool.
Can anyone tell me if I somehow messed up the port assignments in the pod definition?
--
Update: Somehow it started working on its own (or so it seemed). But when I tried to re-create the Service on its own, instead of as part of the Deployment, it stopped working.
This is my Service config:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: meteor
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: http-server
  selector:
    app: frontend
  sessionAffinity: ClientIP
  type: LoadBalancer
It creates the external IP for the load balancer, and I can see that it is properly matching the pods, but I get a timeout when I try to connect to the external IP. Meanwhile, the load balancer that was created as part of the deployment continues to work just fine.
Do you know how to change the agent VM size in an existing ACS deployment?
We can change the k8s agent via the Azure portal; the agent in Azure is a VM, so we should resize that VM.
Hope this helps.
It looks like the problem was a misspecification of the targetPort: it referred to a named port (http-server) that the pods did not define. Adjusting it to the correct value and replacing the Service definition solved the problem.
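Concretely, targetPort: http-server refers to a named container port, so the pod template must declare that name; a minimal sketch of the matching piece of the frontend Deployment:

ports:
- name: http-server     # the name the Service's targetPort refers to
  containerPort: 3000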