First of all, I am not an expert in K8s; I understand some of the concepts and have already gotten my hands dirty in the configurations.
I correctly set up the cluster configured by my company, but I have this issue.
I am working on a cluster with 2 pods; ingress rules are correctly configured for www.my-app.com and dashboard.my-app.com.
Both pods run on the same VM.
If I exec into the dashboard pod (kubectl exec -it $POD bash) and try to curl http://www.my-app.com, I land on the dashboard pod again (the same happens the other way around, from www to dashboard).
I have to use http://www-svc.default.svc.cluster.local and http://dashboard-svc.default.svc.cluster.local to reach the correct pods, but this is a problem: links generated by the other app will contain the internal k8s host instead of the "public" URL.
Is there a way to configure routing so I can access pods with their "public" hostnames, from the pods themselves?
What should happen when you curl is that the external DNS record (www.my-app.com in this case) resolves to your external IP address, usually a load balancer, which then sends traffic to a Kubernetes service. That service should then send traffic to the appropriate pod. It would seem that you have a misconfigured service. Make sure your services have external IPs that differ between dashboard and www. To see this, a simple kubectl get svc should suffice. My guess is that the external IP is wrong, or the service is pointing to the wrong pod, which you can see with kubectl describe svc <name of service>.
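For illustration, here is a minimal sketch of two correctly separated services; only the service names come from the question, while the selector labels, ports, and type are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: www-svc
spec:
  type: LoadBalancer
  selector:
    app: www            # hypothetical label; must match the www pod's labels
  ports:
    - port: 80
      targetPort: 8080  # assumed container port
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard-svc
spec:
  type: LoadBalancer
  selector:
    app: dashboard      # hypothetical label; must match the dashboard pod's labels
  ports:
    - port: 80
      targetPort: 8080  # assumed container port

With each service selecting a different set of pod labels, kubectl get svc should show two distinct EXTERNAL-IP values, and each hostname's DNS record can point at the matching one.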
I have created a sample API application with Node and Express, to be containerized and deployed into Azure Kubernetes Service (AKS). However, I was unable to access the API endpoint through the external IP generated from the service.yml that was deployed.
I used the deployment center within AKS to deploy my application and generate the relevant deployment.yml and service.yml. The following are the running services, including the external IP.
The following is the response from Postman. I have tried with and without the port number, and with the IP address from kubectl get endpoints, but to no avail. The request eventually times out and I am unable to access the API.
The following is the Dockerfile config.
I have tried searching around for solutions but was not able to resolve this. I would greatly appreciate it if you have encountered similar issues and can share your experience. Thank you.
From the client machine where kubectl is installed, run:
kubectl get pods -o wide -n restapicluster5ca2
This will give you all the pods along with their IPs.
kubectl describe svc restapicluster-bb91 -n restapicluster5ca2
This will give details about the service. Then check:
- LoadBalancer Ingress: the external IP address
- Port: the port to use for access
- TargetPort: the port on the containers, i.e. 5000 in your case
- Endpoints: verify that all the pod IPs, with the correct port (5000), are listed
Log into any of the machines in the AKS cluster and do the following:
curl [CLUSTER-IP]:[PORT]/api/posts, e.g. curl 10.-.-.67:5000/api/posts
Check whether you get a response.
For reference on using kubectl locally with an AKS cluster, check the links below:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
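For comparison, a minimal LoadBalancer Service manifest for an API like this might look like the sketch below; the service name and namespace come from the commands above, port 5000 comes from the question, and the label selector and external port are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: restapicluster-bb91
  namespace: restapicluster5ca2
spec:
  type: LoadBalancer
  selector:
    app: restapi        # hypothetical label; must match the deployment's pod labels
  ports:
    - port: 80          # port exposed on the external IP (assumed)
      targetPort: 5000  # the port the Express app listens on

If targetPort does not match the port the container actually listens on, or the selector matches no pods (so Endpoints is empty), requests will time out exactly as described.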
As I see it, you need to put an ingress in front of your service. Folks use NGINX etc. for that.
If you want to stay "pure" Azure, you could use AGIC (Application Gateway Ingress Controller) and annotate your Ingress to have it exposed over AppGw. You could also spin up your own custom AppGw and hook it up with the AKS Service/LoadBalancer IP.
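A rough sketch of the AGIC route, assuming AGIC is already installed in the cluster; the Ingress name, host, and backend port are placeholders, while the service name and namespace come from this thread:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: restapi-ingress                                     # hypothetical name
  namespace: restapicluster5ca2
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway  # hands this Ingress to AGIC
spec:
  rules:
    - host: api.example.com                                 # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: restapicluster-bb91
                port:
                  number: 80                                # assumed service port

AGIC then programs the Application Gateway to route traffic for that host to the service's pods.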
I'm using Azure Kubernetes Service and have a unique scenario where I want to allow only one connection per pod. I used the "advanced" networking option to set up my cluster such that each pod has its own internal IP address. The problem is, all of these pods are behind a public load balancer IP address, and the load balancer decides where to route the traffic.
I need to either A) set up a rule such that the load balancer allows only one connection per pod and routes new traffic to new pods, one per request, or B) set up an ingress controller to do the same. I think B) is the solution, but I have no clear path on how to do this. I see that you can route by URL, but you'd have to set up a rule for each pod, which is definitely not a good idea. Is there any way to set up a rule that just limits sessions to one per pod, or some other method that works similarly?
Thanks.
This is a very good question. Based on the solutions you suggested in the second part of your question, I would like to add my input here. However, you are not limited to these options; there are more effective, advanced ways people establish connections to their pods.
A.) I am looking at how you are routing traffic from the load balancer to your pods; in general, each pod inside a Kubernetes cluster by default gets its own IP, so the question is how you manage the traffic flow from the external world to each pod. I could add my answer for part A, but this method is not advisable: it is likely that a pod dies and a new pod with a new IP gets created, and you would then need to manually route traffic to the newly created pod, which is exactly why people opted for Kubernetes rather than manually managing Docker containers on a VM. But I might be wrong; you might have a different, more complex system, so it is debatable.
B.) Like you said and researched, Ingress and Services are also a solution. Unfortunately, no ingress controller annotation available as of now limits connections to one per pod. URL-based routing would be one part of the solution, but as you already identified, it carries overhead: it amounts to a single service per single pod and a subdomain for each service, i.e. a single deployment with a unique service associated with it, and each unique service with a unique subdomain. It's a complex setup, but doable.
Edit Based on Comments (Removed HPA)
Based on the information you added, I can suggest a different approach. It is kind of a wrong way of using Kubernetes, but again, that is debatable depending on the kind of system you are planning to build. Run a proxy server (HAProxy, NGINX, or your favorite) on its own on one of the nodes and route traffic from the outside world directly to your pods, using the pods' internal IPs in your proxy config; you can route based on the number of connections, etc. Remember, this is not a Kubernetes pod; it's a standalone service running on the OS. But be cautious: when the node dies the pod dies, and so does the pod's IP address.
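A minimal sketch of such a standalone NGINX config; the pod IPs (10.244.1.x) and port 8080 are hypothetical, and max_conns requires NGINX 1.11.5 or later:

# nginx.conf on the node itself (an OS service, not a Kubernetes object)
events {}
http {
    upstream pods {
        # hypothetical pod IPs and port; these change whenever pods are rescheduled
        server 10.244.1.5:8080 max_conns=1;  # at most one active connection per pod
        server 10.244.1.6:8080 max_conns=1;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://pods;
        }
    }
}

Because pod IPs are ephemeral, this config would have to be regenerated every time a pod is rescheduled, which is the manual overhead mentioned above.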
But this is something we shouldn't do. I am sure in a couple of weeks or so, once you get the bigger picture of K8s and its moving parts, you will agree this is the wrong approach, as there is a lot of manual setup overhead.
Hope this is helpful.
I'm fairly new to the k8s world, but as I understand it, you should be able to do this with the nginx.org/max-conns annotation in an NGINX Ingress Controller:
https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
That way you should be able to limit the number of connections to 1 per 'upstream' or pod.
I.e., the load balancer directs traffic to NGINX, and NGINX proxies the traffic to the pods with one concurrent request per pod.
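A sketch of what that could look like, assuming NGINX Inc.'s ingress controller (the one the nginx.org annotations belong to); the Ingress name, host, and backend service are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: one-conn-per-pod       # hypothetical name
  annotations:
    nginx.org/max-conns: "1"   # cap each upstream server (pod) at one connection
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-svc  # hypothetical service selecting the pods
                port:
                  number: 80

NGINX then skips upstream servers that are at their connection limit, sending new requests to pods with a free slot.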
So I am fairly new to Kubernetes. I am a Windows user (sorry) and have installed Minikube, which I am using to learn Kubernetes. I have created a very simple REST API that should work with port 5000 exposed, where there is a simple route /Hello/{somestring}.
I have created a POD/Deployment and Service for this successfully in Minikube, like this:
minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr
kubectl run simple-sswebapi-pod-v1 --replicas=1 --labels="run=sswebapi-pod-v1" --image=sachabarber/sswebapp:v1 --port=5000
kubectl expose deployment simple-sswebapi-pod-v1 --type=NodePort --name=simple-sswebapi-service
kubectl get services simple-sswebapi-service
Which I can then grab the URL from and paste into my browser, like so:
minikube service simple-sswebapi-service --url
Which gives me this URL
http://192.168.0.29:32246
Which I then try in the browser on my host. All is good; my REST API is running as expected.
But from what I have read, I believe I should ALSO be able to use a DNS name for the service rather than the URL returned above.
In fact, I am not sure what the IP address returned by the --url command is even telling me. It is not one of the ones listed for the service endpoints, nor is it the POD's, from what I can tell from the Dashboard.
This is the service
This is the POD
Shouldn't there be a DNS name available for the service that I could use instead of this fairly hacky way of grabbing the URL from the service I just created? Could someone please let me know what this --url even represents? I am lost here.
I have checked that the DNS add-on is enabled in Minikube, and it is; see kube-dns in the list below.
As I say, this is also what I see for the service inside the Minikube Dashboard.
This confused me even more, as I can't seem to tie any of that back to the ONLY IP address that actually seems to work for me, which is the one I grabbed from the service using this line:
.\minikube.exe service simple-sswebapi-service --url
This IP address is not shown in the dashboard at all.
I thought the service should be available at a DNS name, something like:
simple-sswebapi-service.default.svc.cluster.local
Which is composed of:
- the name of the service
- the namespace
- svc to indicate it's a service
Just for completeness, this is me describing the service on the command line:
What am I missing?
Is my mental model wrong? Should I be able to reach this service using its DNS name from the host too, or is the DNS name ONLY available inside the PODs?
kube-dns is internal DNS. You can only use the DNS name for a service from inside the cluster.
Since your service type is NodePort, you can connect to the service using the IP of the machine (the minikube VM) on that port.
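To see both behaviors side by side, a quick check could look like this; the NodePort URL comes from the question, and the throwaway busybox pod name is an assumption:

# From the host: the cluster DNS name does not resolve, but the NodePort does
curl http://192.168.0.29:32246/Hello/world

# From inside the cluster: kube-dns resolves the service name
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup simple-sswebapi-service.default.svc.cluster.local

The --url output is simply the minikube VM's IP plus the NodePort that Kubernetes allocated when the service was exposed.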
How do I add multiple ingresses or load balancers in Kubernetes for separate services?
Here is the post following which I ended up creating an ingress for my sub-domain. Is there any way to specify the same IP address created by GCE when launching multiple Ingress resources?
I am using GCE to host my cluster. Is there a better way to handle this scenario: exposing www.app1.domain.com and www.app2.domain.com, which are entirely different apps, with two ingress resources that point to these two specific services while using the same external IP address?
Following the post I was able to create the ingress, but I was unable to assign the external IP address to it.
Any help is much appreciated, thank you.
You can just define multiple Ingress resources and apply them to Kubernetes; they don't have to be in the same YAML file. All Ingress resources share the same proxy and are routed via the defined hostname and path to the desired service.
I am not sure what you mean by the external IP address.
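A sketch of two host-based Ingress resources; the service names are placeholders, and this assumes a shared ingress controller (e.g. NGINX), where every Ingress is served from the controller's single external IP:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
spec:
  rules:
    - host: www.app1.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-svc   # hypothetical service name
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-ingress
spec:
  rules:
    - host: www.app2.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc   # hypothetical service name
                port:
                  number: 80

Note that the native GCE ingress controller provisions a separate load balancer (with its own IP) per Ingress, so to share one IP there you would either merge both host rules into a single Ingress or run a shared controller such as NGINX.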
I am running a Kubernetes cluster with 1 master (which is also a node) and 2 nodes on Azure. I am using Ubuntu with the Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP).
I am also able to reach the Kubernetes guestbook frontend service with a bit of a hack: I checked all the listening ports on the master and tried each port with the public IP, and I was able to hit the Kubernetes service and get a response. Based on my understanding, this goes directly to the pod running on the master (which is also a node) rather than through the service IP (which would have load balanced across any of the pods).
My question is: how do I map the external IP to the service IP? I know Kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or, if you create the service from a JSON or YAML file, use the spec.externalIPs array.
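A minimal sketch of the YAML route, using 203.0.113.10 as a stand-in for the master's public NIC IP; the service name and labels are placeholders for the guestbook frontend:

apiVersion: v1
kind: Service
metadata:
  name: frontend            # hypothetical name
spec:
  selector:
    app: frontend           # hypothetical label; must match the frontend pods
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 203.0.113.10          # the node's public IP

kube-proxy on every node then forwards traffic arriving on that IP and port to the service's endpoints, load balancing across the pods instead of hitting only the pod on the master.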