I installed nginx-ingress (with the command below) in AKS, and the public IP is visible on the Kubernetes service in the portal, but I am unable to access the public IP.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
The weird thing is that when I look for the load balancer, I do not see any load balancer in the Load balancer view of the portal, yet a public IP is shown on the Kubernetes service.
It was working a few days back, so I tried reinstalling nginx-ingress, but now it is not working as expected. I am kind of stuck here, and any help would be appreciated.
I'm not sure whether you added the namespace when you ran the command to get the details of the services. The command is:
kubectl get svc --namespace ingress-nginx
If you did include the namespace, then you need to check further, for example the events that the service shows. Maybe the service is stuck in a Pending state or something else is wrong. Once you find the error messages, you will know what to do next.
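For example, assuming the controller service from that manifest keeps its default name (ingress-nginx-controller), you could check its state and recent events like this:

kubectl get svc ingress-nginx-controller -n ingress-nginx
kubectl describe svc ingress-nginx-controller -n ingress-nginx
kubectl get events -n ingress-nginx --sort-by=.lastTimestamp

If the EXTERNAL-IP column stays at <pending>, the events on the service usually tell you why Azure could not provision the load balancer.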
Related
I have created a sample API application with Node and Express to be containerized and deployed into Azure Kubernetes Service (AKS). However, I was unable to access the API endpoint through the external IP generated from the service.yml that was deployed.
I have made use of the deployment center within AKS to deploy my application to AKS and generate the relevant deployment.yml and service.yml. The following shows the running services, including the external IP.
The following is the response from Postman. I have tried with and without the port number, and with the IP address from kubectl get endpoints, but to no avail. The request eventually times out and I am unable to access the API.
The following is the Dockerfile config.
I have tried searching around for solutions but was not able to resolve it. I would greatly appreciate it if anyone who has encountered similar issues could share their experience. Thank you.
From the client machine where kubectl is installed, run:
kubectl get pods -o wide -n restapicluster5ca2
This will list all the pods along with their IPs.
kubectl describe svc restapicluster-bb91 -n restapicluster5ca2
This will give details about the service. Then check the following fields (see the example Service sketch after these steps):
LoadBalancer Ingress: the external IP address
Port: the port to access the service on
TargetPort: the port on the containers, i.e. 5000 in your case
Endpoints: verify that the IPs of all the pods, with the correct port (5000), are listed
Log into any of the machines in the AKS cluster and run:
curl [CLUSTER-IP]:[PORT]/api/posts e.g. curl 10.-.-.67:5000/api/posts
Check whether you get a response.
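For reference, a correctly wired Service for this setup would look roughly like the sketch below. Don't apply it blindly; it is only meant to illustrate how port, targetPort and the selector line up. The selector label (app: restapi) and the external port (80) are assumptions, since your real values come from the service.yml and deployment.yml that the deployment center generated:

kubectl apply -n restapicluster5ca2 -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: restapicluster-bb91
spec:
  type: LoadBalancer
  selector:
    app: restapi          # assumption: must match the pod labels in your deployment.yml
  ports:
  - port: 80              # assumed port exposed on the external IP
    targetPort: 5000      # port the Express app listens on inside the container
EOF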
For reference on using kubectl locally with an AKS cluster, check the links below:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
As I see it, you need to put an ingress in front of your service. Folks use NGINX etc. for that.
If you want to stay "pure" Azure, you could use AGIC (Application Gateway Ingress Controller) and annotate your ingress so that it is exposed over the App Gateway. You could also spin up your own custom App Gateway and hook it up to the AKS Service/LoadBalancer IP.
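For illustration, a minimal AGIC-backed Ingress would look roughly like this; the host and the service name (my-api-svc) are placeholders, and the exact annotation or ingressClassName depends on the AGIC version you install:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway   # tell AGIC to program the App Gateway
spec:
  rules:
  - host: api.example.com            # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api-svc         # placeholder: your ClusterIP service
            port:
              number: 80
EOF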
I am able to create an AKS cluster with advanced networking, and I am able to integrate an application load balancer with this AKS cluster, but I am unable to find any way to integrate the Azure Application Gateway with AKS.
Using Application Gateway as an ingress controller for AKS is in a beta state at the moment (as shown on the GitHub page: https://github.com/Azure/application-gateway-kubernetes-ingress), so I don't believe there will be any support for setting it up with Terraform until it reaches GA.
You might be able to do something with exec resources (e.g. a null_resource with a local-exec provisioner) to set it up, but that would be up to you to figure out.
Unfortunately, it seems there is no way to integrate the Application Gateway with the AKS cluster directly. You can see all the settings available for AKS here.
However, you can integrate the Application Gateway with the AKS cluster once you understand the AKS internal load balancer and the Application Gateway backend pool addresses. You can take a look at the steps for integrating an Application Gateway with an AKS cluster.
First of all, you need to plan the AKS cluster network and pick an exact IP address to use as the Application Gateway backend pool address in Terraform. I hope this helps; if you have any more questions, let me know.
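To make that IP planning concrete, here is a rough sketch of the AKS side, assuming a fixed internal IP of 10.240.0.25 (purely an example) that you would then use as the App Gateway backend pool address; all names and ports are placeholders:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal              # placeholder name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"   # ask Azure for an internal load balancer
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25        # the exact IP you planned for the App Gateway backend pool
  selector:
    app: my-app                      # placeholder: must match your pod labels
  ports:
  - port: 80
    targetPort: 8080                 # placeholder container port
EOF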
I'm trying to expose web services from my existing service on an AKS managed cluster on Azure. I configured the NSG ports from the portal to let outbound traffic out and restarted the VM several times, but my node cannot ping anything on the internet. I'm not trying to ping with an FQDN; I'm trying with an IP address. How can I reach a service in my cluster from the internet?
How did you create the service and pod? By default, a LoadBalancer-type service will create all the rules for you, and you don't need to create them yourself.
Could you share your pod and service details?
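For example, if the deployment were called my-api (a placeholder, not your actual name), exposing it through the Azure load balancer would just be:

kubectl expose deployment my-api --type=LoadBalancer --port=80 --target-port=8080
kubectl get svc my-api -w

The --target-port value is a placeholder for whatever port your container listens on; AKS then creates the public IP and the load-balancing rules for you, and the EXTERNAL-IP column fills in once provisioning completes.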
So I am fairly new to Kubernetes. I am a Windows user (sorry) and have installed Minikube. I am trying to learn Kubernetes using Minikube. I have created a very simple REST API that should work with port 5000 exposed, where there is a simple route /Hello/{somestring}.
I have successfully created a Pod/Deployment and a Service for this in Minikube like this:
minikube.exe start --kubernetes-version="v1.9.0" --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="Minikube Switch" --v=7 --alsologtostderr
kubectl run simple-sswebapi-pod-v1 --replicas=1 --labels="run=sswebapi-pod-v1" --image=sachabarber/sswebapp:v1 --port=5000
kubectl expose deployment simple-sswebapi-pod-v1 --type=NodePort --name=simple-sswebapi-service
kubectl get services simple-sswebapi-service
I can then grab the URL from the service and paste it into my browser like so:
minikube service simple-sswebapi-service --url
Which gives me this URL
http://192.168.0.29:32246
I then try this in the browser on my host, and all is good; my REST API is running as expected.
But from what I have read, I believe I should ALSO be able to use a DNS name for the service rather than the URL returned above.
In fact, I am not sure what the IP address returned by the --url command above is telling me. From what I can tell from the dashboard, it is not one of the IPs listed for the service endpoints, nor is it the pod's IP.
This is the service
This is the POD
Shouldn't there be a DNS name available for the service that I could use instead of this fairly hacky way of grabbing the URL from the service I just created? Could someone please let me know what this --url even represents? I am lost here.
I have checked that the DNS add-on is enabled in Minikube (it is; see kube-dns in the list below).
As I say, this is also what I see for the service inside the Minikube dashboard.
This confused me even more, as I can't seem to tie any of that back to the ONLY IP address that actually works for me, which is the one I grabbed from the service using this line:
.\minikube.exe service simple-sswebapi-service --url
This IP address is not shown in the dashboard at all.
I thought the service should be available at a DNS name something like:
simple-sswebapi-service.default.svc.cluster.local
which is made up of:
- the name of the service
- the namespace
- svc to say it is a service
Just for completeness, this is me describing the service on the command line:
What am I missing?
Is my mental model wrong? Should I be able to reach this service using its DNS name from the host too, or is the DNS name ONLY available inside the pods?
kube-dns is internal DNS. You can only use the DNS name for a service from inside the cluster.
Since your service type is NodePort, you can connect to the service from outside the cluster using the IP of the Minikube machine (the node) on that port.
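For example, with the service name, route and NodePort from your setup (32246 is just whatever Minikube assigned; "world" stands in for {somestring}):

curl http://192.168.0.29:32246/Hello/world

kubectl run -it --rm dns-test --image=busybox --restart=Never -- wget -qO- http://simple-sswebapi-service.default.svc.cluster.local:5000/Hello/world

The first command works from your host because it targets the node's IP and NodePort; the second only works because the throwaway busybox pod runs inside the cluster, where kube-dns can resolve the service name.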
First of all, I am not much of a K8s expert; I understand some of the concepts and have already gotten my hands dirty with the configuration.
I correctly set up the cluster configured by my company, but I have this issue:
I am working on a cluster with 2 pods; ingress rules are correctly configured for www.my-app.com and dashboard.my-app.com.
Both pods run on the same VM.
If I enter the dashboard pod (kubectl exec -it $POD bash) and try to curl http://www.my-app.com, I land on the dashboard pod again (the same happens the other way around, from www to dashboard).
I have to use http://www-svc.default.svc.cluster.local and http://dashboard-svc.default.svc.cluster.local to land on the correct pods, but this is a problem (links generated by the other app will contain the internal k8s host instead of the "public" URL).
Is there a way to configure routing so I can access pods with their "public" hostnames, from the pods themselves?
So what should happen when you curl is that the external DNS record (www.my-app.com in this case) resolves to your external IP address, usually a load balancer, which then sends traffic to a Kubernetes service. That service should then send traffic to the appropriate pod. It seems that you have a misconfigured service. Make sure your service has an external IP that is different between dashboard and www; a simple kubectl get svc should show this. My guess is that the external IP is wrong, or the service is pointing to the wrong pod, which you can see with kubectl describe svc <name of service>.
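As a concrete starting point, using the service names from the question:

kubectl get svc www-svc dashboard-svc -o wide
kubectl describe svc www-svc
kubectl describe svc dashboard-svc
kubectl get endpoints www-svc dashboard-svc

Compare the selectors, external IPs and endpoints of the two services; if they overlap, both hostnames will end up on the same pods.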