I've set up an AKS cluster and am now trying to connect to it. My deployment YAML is here:
apiVersion: v1
kind: Pod
metadata:
  name: my-test
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443
If I run the dashboard (screenshot omitted), I get a view that looks like it should be telling me the external endpoint, but isn't. My theory is that this is because the YAML file only deploys a Pod, which is somehow not able to expose an endpoint. Is that the case, and if so, why? Otherwise, how can I find this endpoint?
That's not how it works; you need to read up on basic Kubernetes concepts. Pods are only containers: to expose Pods you need to create Services (and you need labels), and to expose Pods externally you need to set the Service type to LoadBalancer. You probably also want to use Deployments instead of bare Pods; they are a lot easier and more reliable.
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
So in short, you need to add labels to your Pod and create a Service of type LoadBalancer with a selector that matches your Pod's labels:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 443
  type: LoadBalancer
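For completeness, here is a sketch of the asker's Pod with a matching label added. The label app: MyApp is just the example value used in the Service above; any key/value pair works as long as the Service selector matches it:

apiVersion: v1
kind: Pod
metadata:
  name: my-test
  labels:
    app: MyApp   # must match the Service's spec.selector
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443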
We have defined our internal load balancer as follows:
apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-aks-test
It has its IP and external IP. We want to access this service from a VM in another virtual network.
We need to know its DNS name (the fully qualified name) in advance, because we deploy multiple applications from a deployment platform and want to know, from the service name alone, how to reach a service once it has been deployed, without waiting for its IP address to be determined (manually or automatically). For example, this is our APP1; after it we automatically install APP2, which needs to reach this service.
For that reason we would like to avoid using the IP information.
How can we determine the hostname by which the second application will access the service?
The only information I found in the docs is: "If your service is using a dynamic or static public IP address, you can use the service annotation service.beta.kubernetes.io/azure-dns-label-name to set a public-facing DNS label." But this is for a public load balancer, which we do not want!
Set up ExternalDNS in your K8s cluster. There is a guide for Azure Private DNS. This will let you update the DNS record for any hostname you pick for the service, dynamically via Kubernetes resources.
A sample config looks like this (excerpted from the Azure Private DNS guide):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: externaldns
spec:
  selector:
    matchLabels:
      app: externaldns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: externaldns
    spec:
      containers:
      - name: externaldns
        image: k8s.gcr.io/external-dns/external-dns:v0.7.3
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=example.com
        - --provider=azure-private-dns
        - --azure-resource-group=externaldns
        - --azure-subscription-id=<use the id of your subscription>
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azure-config-file
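With ExternalDNS running, you can then pick the hostname for the internal service via an annotation on the Service itself. A minimal sketch, assuming the zone matches the --domain-filter above (the hostname value is just an example):

apiVersion: v1
kind: Service
metadata:
  name: ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # ExternalDNS creates/updates this record in the private zone:
    external-dns.alpha.kubernetes.io/hostname: ads-aks-test.example.com
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: ads-aks-test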
An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
https://learn.microsoft.com/en-us/azure/aks/internal-lb
It seems you want this configuration? Is there peering between the two virtual networks? You also need to allow the communication in the NSG.
You can run kubectl get svc and use the EXTERNAL-IP of the ads-aks-test service; since you set the annotation to "true", it will be an internal IP.
If you are looking to resolve the service name from within the same cluster, you can use the service name itself.
https://kubernetes.io/docs/concepts/services-networking/service/
You can use something like your-svc.your-namespace.svc.cluster.local.
Note that this only works between services in the same Kubernetes cluster.
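For example, a sketch assuming ads-aks-test lives in the default namespace and the cluster uses the default cluster.local suffix:

# from any pod in the same cluster:
nslookup ads-aks-test.default.svc.cluster.local
# and, if the service speaks HTTP on its port 9000:
curl http://ads-aks-test.default.svc.cluster.local:9000/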
I have created a Kubernetes cluster on Azure and deployed some pods that have no frontend (microservices).
I have tested them locally using Postman and VS Code: the microservices return either 200 OK or 500.
The problem is that in Kubernetes the external IP is assigned correctly, but it is impossible for me to access the services from outside.
I have another Mongo container that I can access without problems. I attached some screenshots (omitted here) to help diagnose the issue.
Can you help me? Thanks!
Kubernetes is a bit more complex than plain Docker containers, so getting a service running can be confusing at first. I will explain the points at which you need to configure the exposure of a service.
Each container has its own IP address space, so every container can use the same port for its application. In your case that might be port 6060. This is the port the application needs to bind to, on all network interfaces (IP 0.0.0.0), to be reachable from outside the container. It is also the port you would declare with EXPOSE in your Dockerfile.
When testing locally, you can map each container to a different local port: docker run -p external-port:internal-port.
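For example (a hypothetical image name, mapping local port 8080 to the container's port 6060):

docker run -p 8080:6060 my-api-image
curl http://localhost:8080/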
The port you use for EXPOSE is the port you configure as containerPort in a Pod or Deployment.
One or many pods are exposed as a load-balanced service inside Kubernetes using a Service. There you usually map a request port (for HTTP, typically 80) to the container port, in your case 6060.
The Service can then be exposed externally using a LoadBalancer. The external IP of the load balancer is mapped to the (virtual) IP of your Service; the Service maps the request port to the container port and selects an appropriate pod using its selector. The pod contains a container listening on the container port, which then replies to your request.
The whole chain must be configured correctly for it to work. Keeping it simple (not using different ports for each application) makes it easier to get right.
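To illustrate the chain, here is a minimal sketch of a Service fronting such a pod. All names are placeholders; it assumes the pod carries the label app: my-app and its application listens on 6060:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  ports:
  - port: 80          # the request port exposed by the load balancer
    targetPort: 6060  # the containerPort the application binds to
  selector:
    app: my-app       # must match the pod's labels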
Did you try to hit the REST API URIs, like ExternalIP:Port/uri? That should be accessible; I use this approach with AKS as well.
From your question and the YAML file in your comment, a possible reason is that you set a command on your deployment's container; this command overrides the default command of the image. So I suspect your application may not be starting; you can check that.
I also suggest you check whether the port you expose to the outside is the same as the port the image actually listens on.
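A few standard kubectl commands for that check (pod names are placeholders):

kubectl get pods                   # is the pod Running, or CrashLoopBackOff?
kubectl describe pod <pod-name>    # events, e.g. image pull or probe failures
kubectl logs <pod-name>            # does the application start and bind its port?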
I have been able to bring up one of the four microservices.
I have tried to bring up the three remaining microservices with the same YAML (changing the image URL and the port), but they do not work.
The YAML used is this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: permissions
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: permissions
    spec:
      containers:
      - name: permissions
        image: URL IMAGE
        ports:
        - containerPort: 6060
      imagePullSecrets:
      - name: nameimage
---
apiVersion: v1
kind: Service
metadata:
  name: permissions
spec:
  type: LoadBalancer
  ports:
  - port: 6060
  selector:
    app: permissions
I added this to set resource limits for the other microservices:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: users
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
      - name: users
        image: URL IMAGE
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 6061
---
apiVersion: v1
kind: Service
metadata:
  name: users
spec:
  type: LoadBalancer
  ports:
  - port: 6061
  selector:
    app: users
As I said, I could only bring up the first one.
Any help?
Thanks!
I have to create a Kubernetes cluster in MS Azure manually, not using AKS. So:
I've created 2 VMs in one availability set: one for the k8s master and a second for a k8s node.
I've created an external load balancer and added the 2 VMs to the backend pool.
I've created the k8s cluster using kubespray.
I've created a Deployment and a LoadBalancer Service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wrapper
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wrapper
    spec:
      containers:
      - name: wrapper
        image: wrapper:latest
        ports:
        - containerPort: 8080
          name: wrapper
---
apiVersion: v1
kind: Service
metadata:
  name: wrapper
spec:
  loadBalancerIP: <azure_loadbalancer_public_ip>
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: wrapper
But the LoadBalancer Service's external IP is always pending:

kubectl get services
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP
wrapper   LoadBalancer   10.233.38.7   <pending>
Also, telnet azure_loadbalancer_public_ip doesn't work. I've tried to use NodePort instead of LoadBalancer, but in that case I get two endpoints for my service: one on the k8s master and one on the k8s node.
What I want is a single entry point, azure_loadbalancer_public_ip, that balances traffic across all nodes in the cluster.
Could you please help me understand what I'm doing wrong, and whether it is possible to bind the Azure external load balancer to a LoadBalancer Service in Kubernetes?
You don't have to do that; k8s (when it's configured properly) handles it for you. All you have to do is give it the proper rights to create a load balancer in Azure.
It basically can't talk to the Azure API to create a load balancer. You need to:
1. Add the option --cloud-provider=azure to your kube-apiserver, kube-controller-manager, and all the kubelets running on your nodes.
2. Make sure that your Azure VMs have access to the Azure API.
3. Restart all the components from step 1.
This is not needed if you have the Cloud Controller Manager installed, which is beta in K8s 1.12 as of this writing. Note that the --cloud-provider option will be deprecated at some point in favor of it.
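The Azure provider also needs credentials, typically supplied in a cloud config file passed via --cloud-config. A sketch of what that file commonly looks like (conventionally /etc/kubernetes/azure.json; every value is a placeholder for your own service principal and resource group):

{
  "cloud": "AzurePublicCloud",
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "aadClientId": "<service-principal-app-id>",
  "aadClientSecret": "<service-principal-secret>",
  "resourceGroup": "<resource-group-of-the-vms>",
  "location": "<azure-region>"
}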
I was following this Kubernetes tutorial in order to set up a DNS service and connect two separate Kubernetes pods. The one that should serve as a gateway is listening on port 80, the other one on port 90.
When I use their node IPs, curl 10.32.0.24 and curl 10.32.0.25:90, I can reach them. Nevertheless, I can't figure out how to access them via my DNS service. What will the URL be?
The namespace is default, and this is the result of kubectl cluster-info:
Kubernetes master is running at IP_OF_MY_SERVER:6443
KubeDNS is running at IP_OF_MY_SERVER:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
My deployment.yaml is almost the same as in the tutorial:
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  selector:
    name: busybox
  clusterIP: None
  ports:
  - name: foo # Actually, no port is needed.
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1
  subdomain: default-subdomain
  containers:
  - image: time-provider
    name: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    name: busybox
spec:
  hostname: busybox-2
  subdomain: default-subdomain
  containers:
  - image: gateway
    name: busybox
The Kubernetes DNS service works inside the cluster and provides DNS names for Pods and Services there, not for access from external networks.
Here is an extract from the documentation you used:
Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain. This is best illustrated by example:
Assume a Service named foo in the Kubernetes namespace bar. A Pod running in namespace bar can look up this service by simply doing a DNS query for foo. A Pod running in namespace quux can look up this service by doing a DNS query for foo.bar.
So the DNS names of your resources exist only inside the cluster.
You reach the services from the external network by node IPs, curl 10.32.0.24 and curl 10.32.0.25:90, and that is a correct way. If you want to use DNS names to connect to the cluster from outside, you should use some other DNS service to point names at your cluster nodes or load balancer.
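That said, the hostname and subdomain fields in the manifests above do give each pod an in-cluster record of the form <hostname>.<subdomain>.<namespace>.svc.<cluster-domain>. A sketch, assuming the default namespace and the default cluster.local domain (busybox-2 runs the gateway image on port 80, busybox-1 the time-provider on port 90):

# from any pod inside the cluster:
curl busybox-2.default-subdomain.default.svc.cluster.local
curl busybox-1.default-subdomain.default.svc.cluster.local:90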
I recommend you use a Service object to expose your application. Here are some articles about it: ways to connect, use a Service to access applications.
Using Kubernetes on Azure Container Service (not the new AKS though).
I'm deploying a front-end app like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 2
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: etc/etc
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: frontend
I can see from the logs that it started correctly.
From kubectl get services I can see that it has been assigned an external IP, but when I try to access that via HTTP, it just hangs.
I can also see in the Azure Portal that the Azure load balancer was created and is pointing to the correct external IP and backend pool.
Can anyone tell me if I somehow messed up the port assignments in the pod definition?
--
Update: somehow it started working on its own (or so it seemed). But when I tried to re-create the Service on its own, instead of as part of the Deployment manifest, it stopped working.
This is my Service config:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: meteor
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: http-server
  selector:
    app: frontend
  sessionAffinity: ClientIP
  type: LoadBalancer
It creates the external IP for the load balancer, and I can see that it properly matches the pods, but I get a timeout when I try to connect to the external IP. Meanwhile, the load balancer that was created as part of the deployment continues to work just fine.
Do you know how to change the agent VM size in an existing ACS deployment?
We can change the k8s agent via the Azure portal; the agent in Azure is a VM, so we resize the VM (screenshot omitted).
Hope this helps.
It looks like the problem was a mis-specification of the targetPort: in the config above, targetPort: http-server refers to a named container port, but the Deployment's container only declares an unnamed containerPort: 3000. Adjusting the targetPort to the correct value and replacing the Service definition solved the problem.
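For reference, a sketch of the corrected Service under that reading (alternatively, name the containerPort http-server in the Deployment and keep the named targetPort):

apiVersion: v1
kind: Service
metadata:
  name: meteor
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: 3000   # matches the unnamed containerPort in the Deployment
  selector:
    app: frontend
  sessionAffinity: ClientIP
  type: LoadBalancer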